Six-pack holographic imaging for dynamic rejection of out-of-focus objects

Six-pack holography is adapted to reject out-of-focus objects in dynamic samples, using a single camera exposure and without any scanning. By illuminating the sample in parallel from six different angles using a low-coherence source, out-of-focus objects are laterally shifted in six different directions when projected onto the focal plane. Pixel-wise averaging of the six reconstructed images then creates a significantly clearer image, with rejection of out-of-focus objects. Dynamic imaging results are shown for swimming microalgae and flowing microbeads, including numerical refocusing by Fresnel propagation. The averaged images reduced the contribution of out-of-focus objects by up to 83% in comparison to standard holograms captured using the same light source, further improving the system's sectioning capabilities. Both simulation and experimental results are presented. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Introduction
Holography can capture the complex wave fronts of samples, recording not only the sample amplitude but also its phase, thus enabling post-capture propagation to other axial distances by digital propagation [1]. This is achieved by interfering the coherent light that interacted with the sample with a clean reference beam that contains no sample modulation. In contrast to bright-field microscopy, however, where light is focused onto the sample by a condenser lens, in holographic microscopy a collimated beam of light (i.e., a plane wave) is typically used to illuminate the sample. This is because illuminating with focused light (i.e., a spherical wave) would introduce a large curvature into the recorded phase; since the sample phase is typically much smaller than this phase curvature, the sample information would become difficult to reconstruct. However, illuminating with a collimated beam leads to a significant decrease in the numerical aperture (NA) of the optical system, and correspondingly increases the depth of field. This causes out-of-focus (OOF) objects in 3D samples to have a strong presence in the image, such that when an OOF object passes either behind or in front of an object in the focal plane, it has visible effects on both the amplitude and phase images.
Six-pack holography (6PH) is an off-axis holographic imaging technique that is able to record six complex wave fronts simultaneously, by projecting onto the digital camera six pairs of interfering beams, and rejecting unwanted interferences using coherence gating [2,3]. Since these complex wave fronts do not overlap in the spatial frequency domain, they can be fully reconstructed. 6PH is an optimized case of spatial holographic multiplexing or angular holographic multiplexing [4], which has previously been applied for various applications, including single-shot Jones matrix acquisition [5], recording fast events [6], acquiring fluorescence and hologram images in the same exposure [7], and recording multi-wavelength holograms in a single shot [8].
Previously, we have used 6PH to obtain super-resolution with synthetic aperture in simultaneous off-axis digital holographic imaging [3]. In the current paper, we adapt 6PH to obtain a new approach for the rejection of OOF objects in off-axis holography by simultaneously imaging six different perspective views in six holographic channels, thereby improving sectioning in the reconstructed dynamic complex wave fronts, regardless of the temporal or spatial coherence of the illumination source. Thus, even when using a low-coherence source with coherence gating, our method is able to further improve the system's sectioning capabilities. Our technique is similar in principle to light field microscopy [9-11], though in 6PH the lateral resolution is not sacrificed for improved axial resolution by using microlens arrays, and complex wave fronts are captured rather than intensity images, enabling techniques such as numerical refocusing using Fresnel propagation [12] and holographic synthetic aperture [13]. In addition, the presented application does not require a sparse sample, as is required in compressive holography [14-16]. Most importantly, the proposed technique is simultaneous and does not require scanning over time, as in other optical sectioning methods, such as confocal microscopy [17] or optical coherence tomography (OCT), whether using a low temporal-coherence source [18,19] or a low spatial-coherence source in reflection mode [20].

Rejection of OOF objects with 6PH
In standard off-axis digital holography, the recorded hologram, containing linear fringes, is digitally Fourier transformed, and the cross-correlation (CC) term is cropped and inverse Fourier transformed to produce the complex wave front image from a single camera exposure. The linear parallel fringes in the hologram act as a carrier frequency for the complex wave front data, shifting the CC terms away from the center of the spatial frequency domain, where the DC term is located, as illustrated in Fig. 1(a). In 6PH, instead of a single pair of sample and reference beams, six pairs of sample and reference beams are used together. In the spatial frequency domain, this produces a pattern of six CC terms and their complex conjugates without any of these terms overlapping, as illustrated in Fig. 1(b), enabling the reconstruction of the six complex wave fronts contained in the six sample beams from a single camera exposure. To avoid unwanted interference between non-matching beams, coherence gating [3,21-23] is used. This is done by using a partially coherent light source and introducing a different optical path delay to each of the six beam pairs, such that each beam pair is delayed by at least the coherence length of the light source relative to the other five pairs; thus, only the matching sample and reference beam pairs create off-axis interference.
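As a minimal sketch of the single-channel reconstruction described above, the following NumPy code Fourier transforms the hologram, crops one CC term, recenters it to remove the carrier frequency, and inverse transforms. The CC-term center and crop radius are illustrative assumptions, not the actual carrier frequencies of the system:

```python
import numpy as np

def reconstruct_channel(hologram, center, radius):
    """Reconstruct one complex wave front from an off-axis hologram.

    `center` is the (row, col) position of the chosen cross-correlation
    (CC) term in the fftshifted spatial frequency domain, and `radius`
    the crop radius in pixels -- both are assumptions of this sketch.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.ogrid[:F.shape[0], :F.shape[1]]
    mask = (rows - center[0])**2 + (cols - center[1])**2 <= radius**2
    # Keep only the chosen CC term.
    cropped = np.zeros_like(F)
    cropped[mask] = F[mask]
    # Recenter the term so the off-axis carrier frequency is removed.
    cropped = np.roll(cropped,
                      (F.shape[0] // 2 - center[0],
                       F.shape[1] // 2 - center[1]),
                      axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(cropped))
```

For a synthetic hologram of a constant object interfering with a tilted plane-wave reference, the recovered amplitude is flat, as expected.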
In order to reduce the presence of OOF objects using 6PH, one can illuminate the sample from six different off-axis angles around the optical axis. As illustrated in Fig. 2, each image produced using a different illumination angle provides a different perspective view of the 3D sample, with the position of in-focus objects remaining constant in all images, while the positions of OOF objects shift in accordance with their projections onto the focal plane. This shift is determined by the magnitude of the off-axis illumination angle, its direction along the sample plane, and the distance of the object from the focal plane. As shown in Fig. 2(a), the lateral shift distance d is equal to z tanθ, where z is the distance of the object from the focal plane, and θ is the off-axis illumination angle. Assuming a 3D sample composed of objects of roughly the same diameter, such as beads or a suspension of cells, ideally the distance d of the OOF object should be greater than the diameter of the object. In such a case, pixel-wise averaging of the six perspective images illustrated in Figs. 2(b)-(g) produces an image in which the in-focus object remains clearly visible, while the contribution of the OOF objects in the final image is decreased to one sixth of its previous value, as shown in Figs. 2(h) and (i). This is because the projection images of the OOF objects are shifted in different directions in each amplitude image, and thus do not overlap in the averaged image.
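This geometry can be checked with a toy numerical example: the shift d = z tanθ for the parameters used later in the paper, and a 1D demonstration that averaging six non-overlapping shifted projections reduces each OOF copy to one sixth while the in-focus pixel keeps full contrast (pixel positions below are arbitrary illustrative values):

```python
import numpy as np

# Lateral projection shift of an out-of-focus object: d = z * tan(theta).
z = 5.1e-6                    # distance from the focal plane [m]
theta = np.deg2rad(23.4)      # off-axis illumination angle
d = z * np.tan(theta)         # ~2.2 um lateral shift

# Toy 1D demo: an in-focus object (fixed pixel) plus an OOF object whose
# projection lands at a different pixel in each of the six channels.
n = 120
channels = []
for k in range(6):
    img = np.zeros(n)
    img[60] = 1.0             # in-focus object: same pixel in every view
    img[10 + 15 * k] = 1.0    # OOF object: shifted per illumination angle
    channels.append(img)
avg = np.mean(channels, axis=0)
# avg[60] stays 1.0; each shifted OOF copy is reduced to 1/6.
```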

Simulation results
First, to demonstrate the method and analyze its limitations, a computer simulation was implemented. The simulation was performed by systematic propagation of a plane wave through the model by increments of 65 nm along the angle of illumination, using the Fresnel propagation algorithm. Once the propagation through the model was complete, the resulting complex wave front image was refocused to the focal plane of z = 0. In this simulation, backscattering is not included; however, scattering along the beam path from multiple objects is included. The magnification, numerical aperture, wavelength, and pixel size used in the simulation were chosen to be identical to those of the optical system, presented later in the paper. Using this simulation, the effect of averaging the six amplitude images produced by the same six illumination angles to be used in the optical system was analyzed. We first analyzed a model of one object, and then analyzed our sectioning capability with two-object model. The model used for this simulation was an absorptive sphere with a diameter of 5.1 μm, as shown in Fig. 3(a). Figure 3(b) shows the single-channel amplitude image of the in-focus sphere, and Fig. 3(c) shows the single-channel amplitude image of the same sphere after moving it 5.1 μm out of focus, and illuminating it at an off-axis angle of 23.4°. Figure 3(d) shows the averaged image produced using all six amplitude images when using the same six illumination angles as in the 6PH optical system presented later.
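A single propagation step of the kind used in such simulations can be sketched with the angular spectrum method, a standard numerical alternative to direct Fresnel propagation (grid size, pixel pitch, and test field below are illustrative assumptions, not the paper's simulation parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by a distance z (angular spectrum method).

    A minimal sketch; the paper's simulation steps a plane wave through
    the 3D model in 65 nm increments and then refocuses to z = 0.
    """
    n0, n1 = field.shape
    fx = np.fft.fftfreq(n1, d=dx)           # spatial frequencies [cycles/m]
    fy = np.fft.fftfreq(n0, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating forward and then backward by the same distance recovers the original field (up to the evanescent band limit), which is the basis of the numerical refocusing used throughout the paper.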
In order to assess the averaging effect, the intensity differences of the images in Figs. 3(c) and (d) from the background value of 1.0 (white) were calculated. The image in Fig. 3(c) possessed a mean difference of 0.36 and a maximum difference of 0.88, while the image in Fig. 3(d) possessed a mean difference of 0.10 and a difference of 0.19 at the location of the maximum difference in Fig. 3(c). Based on these values, the averaged image reduces the mean contribution of the OOF object in this case by a factor of 3.6, and reduces the maximum contribution, that is, the darkest spot, by a factor of 4.6. While one might expect that the darkest spot would be reduced by a factor of 6, this is only true when there is no overlap between the six amplitude images. Assuming a sample containing multiple identical spheres such as the one in this simulation, the minimum z-distance between the centers of two spheres directly above one another, one in-focus and one OOF, is equal to the diameter of the spheres, 5.1 μm. Thus, the minimum reduction factor in this case is 4.6, while the maximum is 6, for objects further from focus. This points to an inherent limitation of this technique, which is that the minimum reduction factor is dependent on the dimensions of the objects in the sample. Non-spherical objects can be closer together and have larger contributions to the image in their x-y dimensions, leading to significant overlap between the amplitude images. This effect is illustrated in Visualization 1, which shows the averaged image of a sphere of 5.1 μm diameter as it moves out of focus. A non-spherical object may have averaged images similar to those of the spherical object shown in the video prior to reaching a z-distance of 5.1 μm.
Fig. 3 caption (continued): (f) Amplitude image of (e) using on-axis illumination, as reconstructed from a single-channel hologram. (g) Averaged image of (e) using six amplitude images as reconstructed from a six-pack hologram, with the same six illumination angles as in the optical system presented later. (h) Synthetic aperture produced using the same six complex wave fronts used to create (g). Circles indicate the six different spatial frequency bands. (i) Synthetic aperture amplitude image produced using the synthetic aperture from (h). Scale bar in (i) also applies to (f) and (g).
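The mean and peak reduction factors quoted above can be reproduced from the deviation-from-background images; the helper below is an illustrative sketch of that computation (the toy arrays in the usage reproduce the quoted 0.36/0.10 and 0.88/0.19 values), not the authors' exact analysis code:

```python
import numpy as np

def oof_reduction_factors(single, averaged, background=1.0):
    """Mean and peak reduction factors of an OOF object's contribution,
    measured as deviations from the white background value (as in the
    Fig. 3 analysis; the peak is taken at the darkest single-channel pixel)."""
    d_single = np.abs(background - single)
    d_avg = np.abs(background - averaged)
    peak = np.unravel_index(np.argmax(d_single), single.shape)
    return d_single.mean() / d_avg.mean(), d_single[peak] / d_avg[peak]
```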
Next, extension of the averaging technique using synthetic aperture methods was tested, in which the shifted spatial frequencies acquired in each of the imaging channels were used to compose an image with larger spatial frequency content [13]. For this purpose, the model used simulated an absorptive sphere 1.6 μm in diameter, placed 2.0 μm above a line pattern similar to that of a single element from a 1951 USAF resolution target, comprised of three absorptive lines, each 1.95 μm wide, with 1.95 μm of empty space between them, as shown in Fig. 3(e). Figure 3(f) shows the single-channel image of this sample, using an on-axis illumination beam. The visibility of the pattern is 0.06 at the edges, with the middle of the central line being obscured completely by the OOF sphere. Figure 3(g) shows the averaged image using the same illumination angles as in the proposed optical system. In this case, the visibility of the pattern is increased to 0.12. Figure 3(h) shows the synthetic aperture constructed from the same six complex wave fronts (i.e. CC terms) used to create Fig. 3(g). In order to maintain the averaging effect, wherever the CC terms overlapped in the spatial frequency domain during the synthetic aperture construction, the average value of the overlapping frequencies was used. The resulting synthetic aperture amplitude image, shown in Fig. 3(i), possesses both the OOF object rejection of the averaged image and the increased lateral resolution of a synthetic aperture image, with a visibility of 0.29 in this case. Visualization 2 illustrates the effects of the illumination angle on the averaged and synthetic aperture images. In this video, the arrangement of the beams around the optical axis is the same as in the optical system, except that all the off-axis illumination angles are equal and increasing.
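The overlap-averaging rule used in the synthetic aperture construction might be sketched as follows, with each CC term pasted into an enlarged spectrum at its carrier offset and overlapping frequencies averaged (band shapes, offsets, and names are illustrative assumptions):

```python
import numpy as np

def synthetic_aperture(cc_terms, offsets, shape):
    """Compose a larger spectrum from shifted CC terms.

    `cc_terms` are the cropped spectral bands and `offsets` their (row, col)
    positions in the enlarged spectrum; where bands overlap, the values are
    averaged to preserve the OOF-rejection effect of the averaging method.
    """
    acc = np.zeros(shape, dtype=complex)
    count = np.zeros(shape)
    for band, (r0, c0) in zip(cc_terms, offsets):
        h, w = band.shape
        nonzero = np.abs(band) > 0
        acc[r0:r0 + h, c0:c0 + w][nonzero] += band[nonzero]
        count[r0:r0 + h, c0:c0 + w][nonzero] += 1
    # Average wherever bands overlap; leave uncovered frequencies at zero.
    return np.where(count > 0, acc / np.maximum(count, 1), 0)
```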
Following this, the effect of the off-axis illumination angles on the averaged image was tested using a model similar to that of a simple biological cell. The model possessed a mildly absorptive ellipsoidal membrane and two identical absorptive spherical organelles, as shown in Fig. 4(a). The membrane had x and y diameters of 5.2 μm and a z diameter of 7.9 μm. The organelles had diameters of 1.6 μm, with one organelle being in focus, and the second organelle being 2.6 μm above the focal plane. In Figs. 4(b)-(d), we can see the effect of larger illumination angles on the effective depth of field and the OOF object removal process. In Fig. 4(b), at 0° illumination, the upper organelle has a strong presence in the image, while in Fig. 4(c) this has been improved by averaging using angles of 26.1°, with the edges of the cell being clearer and the cytoplasm being brighter. In Fig. 4(d), the averaged image produced using angles of 45° is even brighter. The mean values of the pixels in the area of the cell were measured for Figs. 4(b)-(d), and were determined to be 0.58, 0.66, and 0.73, respectively. These values indicate that the effective depth of field of the averaged image decreases as the off-axis angles of the illumination increase, as the higher illumination angles cause the layers of the model that are closer to the focal plane to undergo greater lateral shifts and be further averaged out. In addition, it is apparent that at 45° illumination from six angles, the image begins to suffer from a spatial frequency imbalance, resulting in six visibly more resolved (brighter) regions along the three axes of illumination (as the six beams illuminate in pairs of opposing angles). In between these six higher-resolution regions, there are six less resolved (darker) regions, where the resolution remains closer to what is seen in Fig. 4(c). This effect is further illustrated in Visualization 3, where the averaged images using illumination angles from 0° to 45° are shown.
Finally, the effect of object concentration on the averaged image was tested using a 16.6×16.6×16.6 μm model composed of multiple absorptive spheres of 2 μm diameter at various homogeneous concentrations, with a single in-focus sphere always at the center of the model, and the resulting averaged image was evaluated. The simulation used the same six illumination angles as in the proposed system. At the maximum concentration of 182.89 pM, in which the objects touch each other, as shown in Fig. 5(a), both the single-channel amplitude image and the averaged image fail to enable the in-focus object to be distinguished, with the Michelson visibility of the in-focus object being 0.12 in the averaged image. At a lower concentration of 118.31 pM, shown in Fig. 5(b), the in-focus object begins to be visible in the averaged image, with a visibility of 0.28. At the concentration of 54.19 pM shown in Fig. 5(c), the in-focus object still cannot be distinguished in the single-channel amplitude image, yet it may be distinguished in the averaged image, with a visibility of 0.45. Finally, at the concentration of 11.70 pM shown in Fig. 5(d), the in-focus object is still obscured in the single-channel image, yet in the averaged image the in-focus object is highly visible, with a visibility of 0.63. The in-focus object visibility in the averaged image continues to improve linearly as the concentration decreases, as illustrated in Fig. 5(e) and Visualization 4, with the maximum possible visibility for the in-focus object being 0.75.
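The visibility values quoted here follow the standard Michelson definition, which can be computed over the image region containing the in-focus object (the sample region below is a toy array):

```python
import numpy as np

def michelson_visibility(region):
    """Michelson visibility of an object in an amplitude image region:
    (I_max - I_min) / (I_max + I_min)."""
    i_max, i_min = np.max(region), np.min(region)
    return (i_max - i_min) / (i_max + i_min)
```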
Next, the effect of the averaging technique on the optical path delay (OPD) images produced by holography was tested. The OPD is equal to the phase of the complex wave front image multiplied by the illumination wavelength and divided by 2π. In this simulation, a simple human sperm cell head model was used. The model, shown in Fig. 6(a), possessed an ellipsoidal membrane with a refractive index (RI) of 1.35 (indicated in cyan), cytoplasm of RI 1.38 (indicated in green), and a nucleus of RI 1.42 (indicated in red), with the surrounding medium having an RI of 1.33. The membrane had an x diameter of 5.2 μm, a y diameter of 3.5 μm, and a z diameter of 1.9 μm. The nucleus had an x diameter of 1.9 μm, a y diameter of 1.9 μm, and a z diameter of 1.0 μm. The model was aligned parallel to the xy plane. Figure 6(b) shows the OPD when illuminating the model on-axis (0°). Figures 6(c)-(h) show the six OPD images produced when using the same six illumination angles as in the proposed optical system, and Fig. 6(i) shows the averaged image produced from the six OPD images. In this case, the resulting averaged OPD image is similar to the on-axis image, but still has a mean error of 10%.
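The phase-to-OPD conversion stated above is a one-line operation on the reconstructed (unwrapped) phase map; the phase values below are a toy example:

```python
import numpy as np

wavelength = 632.8e-9  # central illumination wavelength [m]
# A toy unwrapped phase map [rad]; in practice this is the reconstructed
# phase of the complex wave front image.
phase = np.array([[0.0, np.pi],
                  [np.pi / 2, 2 * np.pi]])
opd = phase * wavelength / (2 * np.pi)  # optical path delay map [m]
```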
The attempt was made to improve these results using tomographic reconstruction via the optical diffraction tomography (ODT) algorithm [24]. The tomographic 3D RI map was reconstructed using the same phase maps used for Figs. 6(c)-(h), and an OPD image was reconstructed from the 3D RI map by subtracting the RI value of the medium from the RI value of each voxel and multiplying by the voxel thickness. This converted the value of each voxel from RI to OPD, as OPD = h(ns − nm), where h is the object thickness, ns is the RI of the sample, and nm is the RI of the surrounding medium. Once the voxel values had been converted to OPD values, they were summed along the z-axis to produce the reconstructed OPD image shown in Fig. 6(j). The result is less accurate than averaging, with a mean error of 31%. This is likely caused by reconstructing the 3D RI map using only six projections, which was not enough to produce a high-quality tomographic reconstruction. As the OPD profile of an object may be significantly different depending on its alignment, the same simulation was repeated using the same model, except with the model cell aligned in the yz plane, as shown in Fig. 7(a). The on-axis illumination image for this case is shown in Fig. 7(b), and the six OPD images produced from the six illumination angles are shown in Figs. 7(c)-(h). By this stage, it was evident that the OPD profile of the six off-axis illumination images was significantly different from that of the on-axis image. Figure 7(i) shows the resulting averaged OPD image produced from the six images, in which the mean error is 63%. The OPD image reconstructed using the ODT algorithm and summation is shown in Fig. 7(j), and is even worse than the averaged image, with a mean error of 101%.
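The voxel-wise RI-to-OPD conversion and z-summation described above can be sketched as follows (the axis convention and voxel size are assumptions of this sketch):

```python
import numpy as np

def opd_from_ri_map(ri_volume, n_medium, dz):
    """Project a reconstructed 3D refractive-index map to an OPD image:
    OPD(x, y) = sum over z of (n(x, y, z) - n_medium) * dz,
    where dz is the voxel thickness along the optical axis (axis 2 here)."""
    return np.sum(ri_volume - n_medium, axis=2) * dz
```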
Based on these results, it is clear that neither the averaging method nor the tomographic method can guarantee accurate results in our system when imaging dynamic samples, in which the orientation of the sample is unpredictable. Thus, this averaging method should be used for amplitude reconstruction rather than phase reconstruction.

Experimental system
The proposed 6PH system for OOF object rejection, shown in Fig. 8(a), is based on a modified Mach-Zehnder interferometer, adapted for high-magnification microscopy (in contrast to Ref. [3]). Partially coherent laser light is emitted by a supercontinuum laser source (NKT SuperK EXTREME), followed by an acousto-optical filter (NKT SuperK SELECT) and a laser-line filter (central wavelength: 632.8 nm, full width at half maximum: 3 nm), collectively designated LC in Fig. 8(a). This laser light illuminates a diffractive beam splitter, DBS (DigitalOptics Corporation), and is split into an 11 × 7 pattern of 77 collimated beams that diverge from the optical axis. These beams then pass through lens L1 (focal length 50 mm) to reach the first 50:50 beam splitter, BS1, where they are split into the sample and reference arms of the system. As the beams pass through the lenses, they alternate between travelling in parallel while each individual beam is focusing, and collectively converging onto the optical axis while each individual beam is collimated.
In the sample arm, the 77 beams, traveling in parallel, enter lens L2 (focal length 50 mm) and pass through periscope P1, comprised of two mirrors located along the optical axis with an angle of 270° between their reflective faces, and a retroreflector mirror with an angle of 90° between its reflective faces. They then pass through lens L3 (focal length 150 mm), and all but the six desired sample beams are blocked, as shown in Fig. 8(b). The six beams then pass through a phase delay plate PDP, a set of 2D stairs comprised of six glass steps, each 2 mm thick, as illustrated on the left of Fig. 8(c). Each beam passes through a different step, which introduces an OPD of 1.16 mm between each of the beams; this is greater than the coherence length of the laser, 42.4 μm, preventing the beams from interfering with each other. The six sample beams then pass through aspheric lens L4 (focal length 12.7 mm), causing them to illuminate the sample S at six different angles around the optical axis. Beams 1-4 illuminate the sample at an off-axis angle of 27.5°, while beams 5 and 6 illuminate it at an off-axis angle of 23.4° (in contrast to Ref. [3]). The sample is then imaged by a microscope comprised of microscope objective MO (Zeiss Plan-APOCHROMAT, 63× magnification, 1.4 numerical aperture, in contrast to Ref. [3]) and tube lens L5 (focal length 150 mm). Following this, additional magnification is added by a 4f system (in contrast to Ref. [3]), composed of L6 (focal length 80 mm) and L7 (focal length 125 mm), and, after passing through 50:50 beam splitter BS2, the sample is imaged on the digital camera C (GS3-U3-23S6M, FLIR, 12-bit monochromatic CMOS, 1200 × 1920 square pixels of 5.86 μm each). The total magnification of the microscope is 90×, and the diffraction-limited spot size is 452 nm.
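The coherence-gating arithmetic can be checked numerically: a glass step of thickness t delays its beam relative to air by t(n − 1). The refractive index below is an assumption inferred from the quoted 1.16 mm per-step OPD, not a value stated in the text:

```python
# Coherence-gating check for the phase delay plate (PDP).
t_step = 2e-3        # per-step glass thickness increment [m], from the text
n_glass = 1.58       # ASSUMED refractive index, inferred from the quoted OPD
opd_step = t_step * (n_glass - 1)   # per-step OPD relative to air -> 1.16 mm
coherence_length = 42.4e-6          # coherence length of the filtered source [m]
# The per-step OPD far exceeds the coherence length, so only matching
# sample-reference beam pairs can interfere.
ratio = opd_step / coherence_length
```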
At the beginning of the reference arm, the system is virtually identical to the sample arm, with lenses L8 and L9 and periscope P2 being identical to lenses L2 and L3 and periscope P1, respectively, except that the retroreflector of P2 is mounted on a translation stage, allowing the optical path lengths of the sample and reference arms to be matched by slightly adjusting the retroreflector position. The purpose of lenses L1-L3, L8, and L9, in both the sample and reference arms, is to pass the pattern of 77 beams to the PDPs while magnifying the pattern by a factor of 3, making it simpler to block the undesired beams and increasing the off-axis illumination angle after L4. After the beams pass through lens L9, a different set of six beams than those chosen in the sample arm is allowed to pass, while all other beams are blocked. The six reference beams are arranged in a different pattern, as shown in Fig. 8(b), as this pattern produces the necessary off-axis angles for 6PH. Following this, the six reference beams pass through another PDP, designed to match this different pattern, as illustrated on the right of Fig. 8(c). This introduces the same OPD effect as the sample PDP, as it is designed to ensure that each reference beam has only one matching sample beam that experiences the same delay, and thus will only interfere with that one sample beam. The beams then pass through lenses L10 (focal length 75 mm), L11 (focal length 75 mm), L12 (aspheric, focal length 50.8 mm), L13 (focal length 50 mm), and L14 (aspheric, focal length 152 mm) and BS2, and interfere with their corresponding sample beams on the digital camera plane, producing the multiplexed six-pack hologram. Reference beams 2, 3, 5, and 6 illuminate the camera at an off-axis angle of approximately 1.5°, while beams 1 and 4 illuminate it at an off-axis angle of approximately 2.0°.
A neutral density filter, ND (optical density 1.4), is placed in the reference arm between L10 and L11 in order to match the sample and reference beam intensities, due to the large difference in intensity caused by the microscope magnification. A 2-inch-thick rectangular prism of glass is placed between L11 and L12 to compensate for the large OPD introduced by the MO.
Once the six-pack hologram is captured by the digital camera, a digital 2D Fourier transform is performed, and the six CC terms are individually cropped and inverse Fourier transformed to produce the six complex wave front images. A background six-pack hologram containing no sample was also captured, and pixel-wise division of the sample complex wave fronts by the corresponding background complex wave fronts was done to remove stationary aberrations.
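The background-correction and averaging steps might be sketched as follows (a sketch; the inputs are assumed to be the six complex wave fronts reconstructed from the sample and background six-pack holograms, respectively):

```python
import numpy as np

def averaged_amplitude(sample_fields, background_fields):
    """Pixel-wise average of the six amplitude images, after removing
    stationary aberrations by dividing each sample wave front by the
    matching background wave front."""
    eps = 1e-12  # guards against division by near-zero background pixels
    corrected = [s / (b + eps)
                 for s, b in zip(sample_fields, background_fields)]
    return np.mean([np.abs(c) for c in corrected], axis=0)
```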
All lenses in the system are arranged in a 4f configuration, except for lenses L13 and L14, between which the distance is 125 mm. This is required in order to create the off-axis angles while having all six reference beams illuminate the same point on the camera, and still maintain near-identical optical path lengths for the sample and reference arms. The phase delay plates used in this system were hand-made by gluing together the 2 mm thick sections of glass that comprised each step using optical glue, which may have led to undesired reflections and a decrease in beam intensities. The system may be improved by using phase delay plates of solid glass. Furthermore, in this system a filtered supercontinuum laser source was used to produce light of a limited coherence length. Other collimated, partially coherent light sources, such as a laser diode, can be used. However, any decrease in the coherence length will decrease the area of the camera over which interference is observed, while increasing the coherence length can lead to cross-talk between the beams. The latter problem can be solved by increasing the thickness of the PDP steps.

Experimental results
A sample of microalgae (Chlamydomonas reinhardtii) freely swimming in water was imaged at 50 fps, with the six-pack hologram of a single video frame shown in Fig. 9(a). In this multiplexed hologram, there is one in-focus cell and a single OOF cell just above it. Focusing was done manually, using a cell resting on the glass slide. The digital Fourier transform of this frame was taken, with the corresponding spatial frequency power spectrum shown in Fig. 9(b), and from the six CC terms the six amplitude images, shown in Figs. 9(c)-(h), were reconstructed. Following this, the amplitude images were pixel-wise averaged to produce the averaged image shown in Fig. 9(i). It can be seen that in each of the six amplitude images the OOF cell is projected onto a different location in the image, with the OOF cell fully overlaying the in-focus cell in Fig. 9(d). If Fig. 9(d) were the only amplitude image available, as in standard holography, the in-focus cell would remain obscured. However, in the averaged image, the OOF cell is averaged out of the image, allowing the in-focus cell to remain visible. In addition, the averaging process acts as a condenser would, decreasing the depth of field of the system. As the average off-axis angle of illumination is 26.1°, which corresponds to a condenser of numerical aperture 0.44, the effective depth of field in this system is decreased from 560 nm to 337 nm. Due to this, the averaged image is not identical to any of the single-channel amplitude images, as the depth of field is significantly smaller than the typical length of the cell, which is 10 μm. Rather, only the region of the cell that is at the focal plane is visible. A video of the single-channel amplitude and averaged images of this sample is shown in Visualization 5.
As the averaged image is constructed from six complex wave front images, it is possible to numerically refocus the image through use of Fresnel propagation, as illustrated in Fig. 10. Figure 10(a) shows the amplitude image from a single holographic channel, in which there is an in-focus cell with an overlaying OOF cell. In Fig. 10(b) the averaged image is shown, in which the OOF cell is averaged out. Figures 10(c) and (d) show the same images as (a) and (b), respectively, after numerically refocusing by 9.1 μm. A video of the refocusing of the dynamic sample is provided in Visualization 6.
A thick sample of 12 μm polystyrene beads flowing in water was also imaged at 50 fps, with the results shown in Fig. 11. Using the process described above, the averaged image was reconstructed from a single frame of the video. The cropped video frame and associated spatial frequency power spectrum are shown in Figs. 11(a) and (b), respectively. In the standard, single-channel amplitude image shown in Fig. 11(c), which was reconstructed from a single CC term taken from Fig. 11(b), there are two beads: a single in-focus bead in the center, and an occluding OOF bead, seen as a smaller, darker circle at the bottom of the image, with the OOF bead almost completely obscuring the in-focus bead. However, in the averaged image shown in Fig. 11(d), captured using the same low-coherence source, the presence of the OOF bead is reduced by 83% (5/6), and thus it is barely visible in comparison to Fig. 11(c), allowing the in-focus bead to be seen more clearly and further demonstrating the significant improvement of this 6PH sectioning approach over standard holography with the same low-coherence light source. The full video of the dynamic sample is shown in Visualization 7.

Conclusion
A new technique for dynamically rejecting OOF objects in holography using 6PH was presented. We have shown that by averaging the six amplitude images obtained simultaneously by 6PH, it is possible to decrease the contribution of OOF objects in a single location by up to 83%. This method can be combined with synthetic aperture techniques to both decrease the presence of OOF objects and increase the lateral resolution. Using 6PH, the averaged images can be obtained from a single camera exposure, and we have shown that the averaged image can still be numerically refocused as in regular digital holography. These techniques can enable holographic imaging of dynamic samples containing multiple objects that may occlude the objects of interest, as in the swimming microalgae sample, allowing sectioned numerical refocusing in samples where this would normally be difficult to obtain. While the averaging technique is effective, averaging the six laterally shifted images creates six fainter images of the OOF object. This causes the OOF object to be present in a larger area of the image, limiting the spatial density of the sample, though we have shown that the technique is still effective even at high sample concentrations. While applying the averaging technique to OPD images would be of significant value, we have shown that it is not possible to consistently reconstruct accurate OPD images from the six off-axis illumination images produced by our system, as dynamic samples will have significantly different OPD profiles when observed from different perspectives. Averaging in the case where the sample length is aligned with the optical axis results in a quantitative phase image that barely resembles an on-axis illumination phase image of the same object. Furthermore, use of 3D tomographic reconstruction algorithms to reproduce the OPD image also fails in such cases, producing even less accurate results due to the limited number of projection images.
In the future, the presented system can be improved by using separate arrays of lenses for each reference beam, thereby decoupling the off-axis interference angle from the reference-beam magnification, allowing the reference beams to be larger and better match the sample beam size. The SNR can also be improved by using a camera with a larger dynamic range. In addition, the possibility of improving OOF object rejection by training a deep learning network on images from our simulation could be investigated.