Mechanically scanned interference pattern structured illumination imaging

We present a fully lensless single pixel imaging technique using mechanically scanned interference patterns. The method uses only simple, flat optics; no lenses, curved mirrors, or acousto-optics are used in pattern formation or detection. The resolution is limited by the numerical aperture of the angular access to the object, with a fundamental limit of a quarter wavelength and no fundamental limit on working distance. While it is slower than some similar techniques, the lack of a lens objective and simplification of the required optics could make it more applicable in difficult wavelength regimes such as UV or X-ray. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

equivalent to those mentioned above for shadow imaging, most SI methods utilize lenses for pattern formation and projection [36][37][38].
With IPSII, interfering coherent beams make high-resolution patterns without the need for a projection lens and with no fundamental limit on working distance. However, for various reasons, most forms of IPSII employ lenses or other limiting optical elements. In structured illumination microscopy (SIM) [3,5], interference patterns are projected through the microscope objective of a conventional microscope, effectively making each pixel a 2 × 2 IPSII image, doubling the resolution [4] and improving optical sectioning [39]. Later research created interference patterns directly on the target, without passing through a lens, decoupling resolution and DOF [1,7,8]. These experiments used a small number of fixed beam angles, limiting the FOV for a single-pixel detector. As such, these implementations were augmented by traditional imaging to increase the number of pixels in the image.
Further research with structured illumination includes techniques such as SPIFI [37,38] and CHIRPT [12,13] that multiplex spatial information into the signal frequency spectrum. A similar technique, DEEP [9,10,12], shows that spatial frequency can also be effectively multiplexed onto the signal time frequency spectrum. More recently, F-basis [11] demonstrated single-step 3D imaging with all information stored in the temporal frequency spectrum of the temporal signal of a single detector. DEEP and F-basis use acousto-optics to split the illumination into two or more beams [40] that can be quickly scanned, but are still limited by an objective lens used to recombine the beams. Axial structured imaging [14] was developed as an alternative to optical sectioning or light-sheet microscopy.
In this paper we present theory and proof-of-principle experiments of a truly lensless, single-pixel mechanically scanned IPSII technique that requires only flat mirrors and flat beam-splitters. Our design is based on an interferometer with computer controlled mirrors, used to form variable interference patterns which illuminate the object. This allows measurement of arbitrary spatial frequency components, resulting in a pixel count limited only by the precision of the mirrors, and an FOV limited only by the size of our laser beams. The resolution is limited by the numerical aperture of a single beam splitter. We show how this design could be used to measure phase as well as intensity of light passing through an object, essentially allowing for digital holography with a single pixel detector and an arbitrary FOV and working distance. With straightforward back-propagation methods, this would produce 3D images of absorptive, transparent, or complex objects.
It has been noted that the speed of mechanical scanning methods would be limited [10], and this is indeed the case with our design. We have not attempted to optimize the speed of our method, as fast optical IPSII technologies already exist (such as F-basis), and our method is not a good candidate for high speed optical microscopy. However, reasonable imaging speeds should be obtainable since the mechanical requirements of our system are similar to some widely-used mechanically-scanned implementations of LIDAR and confocal laser scanning microscopy.
The key advantage of MAS-IPSII is the lack of focusing elements, which should make it better suited than similar methods to UV and x-ray imaging at high resolutions. Even with small angles, x-ray IPSII could push the state of the art in x-ray resolution, as IPSII can achieve wavelength-scale resolution with an angle scan range of just 15° per beam. Another advantage of MAS-IPSII is that it reduces IPSII to its simplest form, with two directly controllable beams with well defined, separate, measurable, and manipulable beam modes that stay consistent throughout the measurement process. This allows us to experiment with issues of concern for all IPSII methods, such as the effects of wavefront distortions, errors in fringe angle and spacing, and the effect of wavenumber-dependent shadows and glare.

IPSII signal equation
In this section we derive the signal measured by the detector. This derivation applies to many forms of IPSII. It does not apply to methods which use more than two beams at a time [5] without modulating the signal generated from individual beam pairs at different frequencies (as in DEEP and F-basis) such that they can still be considered separately.
In IPSII, coherent beams overlap to create an interference pattern. A photodetector with a uniform response over the scale of the object measures reflection off or transmission through the object, yielding information about the overlap between the interference pattern and the object. This signal is measured for different interference patterns, generated by varying the angle between the beams. We assume two beams (laser beams in our case) with the same wavelength λ, aside from a small frequency offset ∆ω, which is negligible except where explicitly included in the following equations. The frequency offset sweeps the phase, causing fringes to move across the object and giving measurements of each spatial frequency at variable phases. An alternate method is to make a measurement at four discrete phases [31].
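The four-discrete-phase alternative can be sketched numerically. The snippet below is our own illustration (the function name and the signal model S(φ) = C + Re[s e^{iφ}] are assumptions for demonstration, not taken from [31]); it shows the standard phase-stepping arithmetic that recovers the complex fringe coefficient while cancelling the constant background:

```python
import numpy as np

def four_phase(S0, S90, S180, S270):
    """Recover the complex fringe coefficient s from four measurements
    S(phi) = C + Re[s * exp(1j*phi)] taken at 90-degree phase steps.
    The constant background C cancels in the differences."""
    return ((S0 - S180) + 1j * (S270 - S90)) / 2

# Synthetic check with an arbitrary coefficient and background
s_true = 0.4 * np.exp(1j * 1.1)
C = 2.0
S = [C + np.real(s_true * np.exp(1j * phi))
     for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
assert abs(four_phase(*S) - s_true) < 1e-12
```

The continuous frequency-offset sweep used in this paper averages over many such phase values rather than just four, improving noise rejection.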
We assume a 2D object-light interaction m(x, y), which describes the object's response in amplitude and phase to an incident wave. For example, depending on whether the detector is placed in front of or behind the object, m(x, y) may represent the complex Fresnel reflectivity or transmissivity, respectively, of the object at a given point into the solid angle subtended by the detector from that point. We also define M(x, y) = |m(x, y)|^2. In the case of reflectivity or transmissivity, for example, this is the function that would be measured in conventional imaging; all phase information is lost in M(x, y).
The two laser beam profiles are described by the complex transverse mode functions

  A_{1,2}(x_{1,2}, y_{1,2}) = |A_{1,2}(x_{1,2}, y_{1,2})| e^{iϕ_{1,2}(x_{1,2}, y_{1,2})},

where x_{1,2} and y_{1,2} are the transverse beam coordinates, the real amplitudes |A_{1,2}| are the transverse field profiles of the modes, and the ϕ_{1,2} functions represent the position dependent phases of the modes. If the wavefronts are flat, the resulting patterns will be sinusoidal in nature, such that individual measurements obviously correspond to spatial frequency components of the object. If the wavefronts are not flat, we show that the signal equation may still be put in terms of a Fourier transform. We assume imaging occurs within a small enough volume that we can ignore diffraction of the beam mode (i.e. we have dropped the axial dependence of the transverse amplitude and phase profiles aside from the phase propagation). The resulting models for the electric fields of the two beams are

  E_{1,2}(r, t) = A_{1,2}(x_{1,2}, y_{1,2}) e^{i(k_{1,2}·r − ω_{1,2} t)} + c.c.,

where the k_{1,2} are the individual beam wave vectors, which differ from each other only in direction (neglecting the small frequency shift noted earlier) such that

  |k_1| = |k_2| ≡ k_l = 2πn/λ.

The lasers overlap on the object at an angle θ_1 + θ_2 from each other (see Fig. 1) and are oriented at an azimuthal angle φ. For this derivation, we assume they are symmetrically oriented around the z-axis of the object plane (i.e. θ_1 = θ_2 = θ) such that the difference between the k_{1,2} vectors lies in the x, y plane at an angle of φ from the x-axis. They are also positioned such that the centers of each mode overlap (i.e. x_{1,2} = y_{1,2} = 0 at x = y = 0). The mode of each laser is projected onto the object plane, resulting in a transform from x_{1,2} and y_{1,2} to x and y that is a function of θ and φ, though for small values of θ the transform is trivial (x_{1,2} ≈ x and y_{1,2} ≈ y).
The intensity profile I of the interference pattern resulting from the overlapping beams contains a term constant in time plus an oscillating portion that is the product of an oscillating sinusoidal interference pattern and the individual modes,

  I(x, y, t) ∝ |A_1|^2 + |A_2|^2 + 2|A_1||A_2| cos(k_x x + k_y y + ∆ω t + ϕ_2 − ϕ_1),

where k_x = 2k_l sin θ cos φ and k_y = 2k_l sin θ sin φ. The spacing of the fringes in the interference pattern (ignoring the contributions from A_{1,2}) is given by

  d = λ / (2n sin θ),   (6)

where n is the index of refraction of the medium. These interference fringes may be characterized by another vector k_x x̂ + k_y ŷ (not to be confused with the wave vectors of the lasers, k_{1,2}). The measured signal is proportional to the total power reflected from the object, or transmitted through the object,

  S(t) = C
       + ∫∫ M |A_1||A_2| cos(k_x x + k_y y + ∆ω t + ϕ_2 − ϕ_1) dx dy,   (7)

where C is a constant in time. (Note that M and A_{1,2} are functions of x, y, though we have stopped explicitly calling that out in our notation for brevity.) This signal is further processed by performing dual-phase demodulation to extract the quadrature oscillating components of the signal. This results in the complex time averaged signal s(k_x, k_y) (note that the complex phase of s represents the phase of s rather than the phase of the electric field waves). We also define a function combining the object with the beam profiles, M′(x, y) = M(x, y) A_1(x, y) A_2*(x, y). The signal equation then simplifies to

  s(k_x, k_y) = ∫∫ M′(x, y) e^{−i(k_x x + k_y y)} dx dy,   (8)

which is easily recognizable as the Fourier transform of M′(x, y) evaluated at (k_x, k_y) in k-space. The transform in Eq. (8) becomes significantly more complicated if there is a non-negligible dependence on k_{x,y} in M′, and we are not aware of any general method of inverting such a transform, even if this dependence is known. Such k-dependence arises in A_{1,2} because of the angle dependence of the transform from x_{1,2}, y_{1,2} to x, y.
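As a numerical sanity check on this result, the sketch below (our own illustration; the grid, object, fringe wavevector, and beat frequency are arbitrary made-up values) simulates the detector signal of Eq. (7) for one fringe pattern, demodulates it at ∆ω, and confirms that the result equals the Fourier transform of M′(x, y) at (k_x, k_y) as in Eq. (8):

```python
import numpy as np

# Arbitrary test grid and object-times-beam-profile function M'(x, y)
N = 64
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x, indexing="ij")
Mp = np.exp(-((X - 0.2)**2 + Y**2) / 0.1)      # stand-in for M'(x, y)
dA = (x[1] - x[0])**2

kx, ky = 6.0, 3.0                  # fringe wavevector of one pattern
dw = 2 * np.pi * 5.0               # beat frequency Delta-omega (rad/s)
t = np.linspace(0, 1, 2000, endpoint=False)

# Detector signal, Eq. (7): constant term plus the fringe overlap integral
S = np.array([np.sum(Mp * (1.0 + np.cos(kx * X + ky * Y + dw * ti))) * dA
              for ti in t])

# Dual-phase demodulation at Delta-omega gives the quadratures of s
s = 2 * np.mean(S * np.cos(dw * t)) + 2j * np.mean(S * np.sin(dw * t))

# Eq. (8): s equals the Fourier transform of M' evaluated at (kx, ky)
ft = np.sum(Mp * np.exp(-1j * (kx * X + ky * Y))) * dA
assert abs(s - ft) < 1e-9
```

The demodulation convention here (multiplying by e^{+i∆ωt}) fixes the overall sign of the recovered phase; the opposite convention simply measures the conjugate point (−k_x, −k_y).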
It may also arise when M(x, y) itself depends on k_{x,y}, which may occur when only the light scattered into a single direction is sampled, because the 'glare' off of the object changes as the direction of the illumination changes.
The k_{x,y} dependence of A_{1,2} can be made negligible under appropriate experimental conditions discussed later. The k_{x,y} dependence of M(x, y) can be made negligible by sampling a large solid angle. For example, a large detector can be placed directly behind the object for transmission imaging; for reflection imaging, an integrating sphere (with slots cut for beam access) or the averaged signal from multiple detectors at various angles could be used.
This treatment also applies to purely intensity dependent (i.e. incoherent) interactions such as fluorescence imaging or diffuse reflection, with a slight modification. If the object function is better described directly as an intensity response M(x, y) (e.g. describing a fluorophore density and response, which would be inappropriate to describe with a complex response m including phase shifts) then the object-light interaction is better modeled by directly multiplying the function M and the intensity of the interference pattern (instead of the electric fields). In this case, however, the derivation above proceeds identically from the second line in Eq. (7). While the end result is the same in this case, some modifications of the experiment, such as placing the object in just one beam, give different results.

IPSII imaging methods
There are several imaging opportunities readily apparent in Eq. (8). The process directly measures spatial frequency components of M′(x, y) in k-space. After measuring sufficient information in k-space, a simple inverse transform yields M′(x, y), which is the product of the three functions M(x, y), A_1(x, y), and A_2*(x, y). Any one of these three may be effectively measured if the other two are known, or constant with respect to x and y. Any such method must also deal with the k_{x,y} dependence of the beam profiles.
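The measure-then-invert procedure implied by Eq. (8) can be sketched as follows. This is our own illustration with an arbitrary bar-shaped object; in practice each k-space sample would come from one physical interference pattern rather than a computed sum:

```python
import numpy as np

# Illustrative object-times-beam-profile function M'(x, y): a bar target
N = 32
L = 2.0                                  # linear extent of the FOV
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x, indexing="ij")
Mp = ((np.abs(X) < 0.3) & (np.abs(Y) < 0.6)).astype(float)

# "Measure" s(kx, ky) on a grid with spacing dk = 2*pi/L, one
# interference pattern per k-space point, per Eq. (8)
dk = 2 * np.pi / L
ks = (np.arange(N) - N // 2) * dk
s = np.zeros((N, N), complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        s[i, j] = np.sum(Mp * np.exp(-1j * (kx * X + ky * Y)))

# Inverse discrete transform recovers the image of M'(x, y)
recon = np.zeros((N, N))
for a in range(N):
    for b in range(N):
        phase = np.exp(1j * (ks[:, None] * x[a] + ks[None, :] * x[b]))
        recon[a, b] = np.real(np.sum(s * phase)) / N**2

assert np.allclose(recon, Mp, atol=1e-7)
```

Because the k-space grid spacing is matched to the object extent L, the reconstruction is exact here; an object larger than 2π/dk would alias, as discussed in the imaging-properties section.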
One such method is to make an intensity image of an object (i.e. a measurement of M(x, y)). This could be a measurement of transmission or reflection, or even of incoherent processes such as fluorescence. This would be done using known, smoothly varying beam profiles. If the beam intensity is roughly constant over the object, the k_{x,y} dependence of A_{1,2} is removed, up to an overall cos θ intensity dependence, which may be exactly compensated for. A demonstration of this method is discussed in section 5.
Another imaging opportunity apparent in Eq. (8) is the possibility to measure complex fields. By setting m(x, y) and A_2(x, y) to be constants, one can measure A_1(x, y). This could be used to characterize the wavefronts of a laser beam. Or, by inserting an object into the beam, the complex transmission or reflection of the object, propagated to the measurement location, can be measured. By back propagating the result (which may be done with the full phase information), the full 3D complex object could be imaged. Similar to other structured illumination holographic methods (see [41][42][43][44] for example), this could be useful as a single pixel method of acquiring high resolution digital transmission holograms of an object without a high resolution detector array.
Ideally, for hologram measurements only the angle of the reference beam would be scanned relative to the detector screen to avoid changing the projection of the light-field to be measured during the scan. Because only one beam is scanned and the beams are no longer symmetric about the z-axis, the fringe spacing given in Eq. (6) would be modified to d = λ/(n sin θ cos θ), decreasing the maximum resolution of this light field imaging by a factor of 2 relative to the object imaging resolution given in Eq. (9).
A third possible use of Eq. (8) is a simple method to characterize the intensity profile of a laser. While this could also be done with full phase information, using the aforementioned holographic imaging technique, doing so would require a quality reference beam (e.g. from spatial filtering, or using a separate, phase-locked laser with a clean mode). If only the intensity profile is needed, both beams can simply be derived from the same source (as in the interferometers described in section 5), such that A_1(x, y) = A_2(x, y). If M(x, y) is set to a constant, M′(x, y) simplifies (up to that constant) to |A_1(x, y)|^2 = |A_2(x, y)|^2. This method is illustrated in section 5.

IPSII imaging properties
The resolution in IPSII is related to the minimum fringe width, which depends on the maximum beam angle used. The pixel size dx_min for a maximum angle between the interfering waves θ_max is

  dx_min = λ / (4n sin θ_max).   (9)

In IPSII, any point within the volume where the beams overlap for all beam angles will be imaged and in focus [10]. The fringes in an interference pattern are planar, and do not change in the z direction, so everywhere in the imaging volume is equally 'in focus' [1]. There is, however, an effective FOV due to the properties of Fourier transforms. Objects within the interference volume but outside of the FOV will be aliased onto the FOV [45]. The FOV of the reconstructed image depends on the spacing of measured points in k-space, dk, and is given by

  FOV = 2π / dk.

The FOV may also be limited by the response region of the detector or the beam size. In fact, by intentionally limiting the field of view (using an aperture, for example), aliasing can be eliminated.

One problem which could impose constraints similar to a DOF can occur when an object with protruding features casts shadows in the interference pattern, blocking one of the beams in some areas of illumination. The illumination in these areas will not oscillate, and will not contribute to the k-space measurement. Unfortunately, these shadows change with each k-space measurement, so the effect is more complicated than shadows in conventional imaging. Numerical calculations suggest that this tends to add distortions around protruding features. While this could be useful for identifying height changes, it may also distort an image beyond the point of being useful. This effect, which should be manifest in other forms of IPSII, appears to be an unexplored topic, and further work is needed to fully understand it.
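The resolution and FOV relations above reduce to one-line formulas. The helpers below (our own sketch, with illustrative numbers) check the quarter-wavelength limit quoted in the abstract and the wavelength-scale x-ray resolution quoted earlier for a 15° scan per beam:

```python
import math

def pixel_size(wavelength, theta_max, n=1.0):
    """Minimum pixel size: dx_min = lambda / (4 n sin(theta_max))."""
    return wavelength / (4 * n * math.sin(theta_max))

def field_of_view(dk):
    """FOV set by the k-space sampling interval dk: FOV = 2*pi / dk."""
    return 2 * math.pi / dk

# Fundamental limit: theta_max = 90 degrees gives a quarter-wavelength pixel
assert abs(pixel_size(1.0, math.pi / 2) - 0.25) < 1e-12

# A 15-degree scan per beam already reaches wavelength-scale resolution
assert abs(pixel_size(1.0, math.radians(15)) - 0.966) < 1e-3
```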
The speed of image acquisition can be limited by mechanical limitations (as in our current implementation, discussed later), or by photon noise. Because each measurement collects light from the entire object, the photon noise limits in IPSII are equivalent to those of conventional wide-field imaging. If imaging speed is limited by photon noise, and if local intensity is limited to prevent photodamage or photobleaching, IPSII can, in principle, be much faster than rastering techniques. For an N pixel image, wide-field techniques like IPSII can be a factor of N faster than methods in which light is collected only from one region at a time. For 3D imaging, IPSII has an advantage over traditional optical sectioning [11], which must reject out-of-focus light with each measurement. Furthermore, because a multi-pixel detector is not needed, a wider variety of detector technologies are available, potentially reducing detector noise and shortening integration times.

Fig. 2. Two designs based on a Mach-Zehnder interferometer. Both use computer controlled mirrors and a single piezo-mounted mirror for a phase sweep. The Mach-Zehnder layouts allow the angle between beams to vary from positive to negative and through the zero point. The first (a) requires a minimal number of optics, but the maximum angle is limited by the beam size, as the beam overlap diminishes with angle (demonstrated by the picture on the right). This was the setup used to generate the 1D images shown in Fig. 3. The second implementation (b) adds another pair of mirrors to keep the beams centered during the angle scan. It also includes a bowtie configuration after the first beam splitter to simplify balancing the path lengths using the translation stages indicated by white arrows. This was the setup used for the 2D images shown in Fig. 4.

Experiment
IPSII requires (at least) two overlapping coherent beams, with spatial and temporal coherence lengths greater than the desired DOF and FOV, and a method to control the angles of the beams. We also need a way to scan and measure the relative phase of the two beams. To avoid aliasing, IPSII also needs a method to mechanically limit the FOV. Figure 2 shows two schematics we have implemented, both based on a Mach-Zehnder interferometer. The designs allow measurement of both positive and negative spatial frequencies. This can be helpful in practice to compensate for some beam wavefront imperfections, and would be necessary to implement holography as discussed in section 3. They also produce two outputs which are equal up to a π phase shift. We use one pattern to illuminate the object, and the other to illuminate a pinhole used as a phase reference. Separating the pinhole from the target object is convenient, but not strictly necessary.
The frequency difference in our setup is generated by linearly scanning the length of one arm of the interferometer with a mirror mounted on a piezo-electric transducer. Other similar phase scanning methods [46] could be used. Alternatively, acousto-optics or moving diffraction gratings could be used, as in other IPSII related methods. The advantage of acousto-optics is that the modulation is at a much higher frequency, which allows data to be taken faster. Higher frequency modulation also avoids noise at lower frequencies, which generally leads to a better signal-to-noise ratio. Some other methods using acousto-optics also use it as an angle scanning method [9][10][11], but in this case a high-NA lens is needed to reconverge the beams and amplify the angle, which diminishes some of the advantages of IPSII.

To take an image, the mirrors are set to produce a particular interference pattern. Then the phase of the interferometer is ramped using the piezo-mounted mirror. Digital lock-in detection is applied to determine the quadrature components of the object signal relative to the signal from the detector behind the pinhole. These give the phase and amplitude of the spatial frequency component for the given interference pattern. The mirrors are then repositioned to create a different pattern, and this process is repeated. Once all Fourier coefficients have been measured, a simple inverse Fourier transform reconstructs the image.

The data gathering and interpreting process is demonstrated in Fig. 3 using the simple 1D imaging setup shown in Fig. 2(a). The figure shows both an 'image' of a 1D object (a pair of vertically oriented wires) as well as a measurement of the laser beam profile, using the beam measurement method described in section 5. The raw data for one pattern and the power spectrum of the resulting k-space measurements are shown, along with the image reconstructions.
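The lock-in step referenced to the pinhole detector can be sketched as follows. This is our own illustration, not the paper's implementation: the function `lockin_relative` and its FFT-based analytic-signal construction are assumptions, and it presumes both detectors are sampled synchronously over an integer number of beat periods with a clean reference oscillation:

```python
import numpy as np

def lockin_relative(obj, ref):
    """Dual-phase digital lock-in: complex amplitude of `obj` at the
    oscillation frequency of `ref`, phase-referenced to `ref`.
    The quadrature copy of the reference is built with an FFT-based
    analytic signal (a Hilbert-transform trick)."""
    ref = ref - ref.mean()
    n = len(ref)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0                      # assumes even n
    analytic = np.fft.ifft(np.fft.fft(ref) * h)
    d = obj - obj.mean()
    I = 2 * np.mean(d * analytic.real)   # in-phase quadrature
    Q = 2 * np.mean(d * analytic.imag)   # 90-degree quadrature
    amp_ref = np.sqrt(2 * np.mean(analytic.real**2))
    return (I + 1j * Q) / amp_ref        # arbitrary overall scale

# Synthetic check: object beats at the reference frequency, phase-shifted
t = np.linspace(0, 1, 4000, endpoint=False)
ref = np.cos(2 * np.pi * 10 * t)
obj = 0.5 + 0.3 * np.cos(2 * np.pi * 10 * t + 0.7)
s = lockin_relative(obj, ref)
assert abs(abs(s) - 0.3) < 1e-9          # fringe amplitude
assert abs(np.angle(s) + 0.7) < 1e-9     # phase relative to reference
```

Referencing the phase to the pinhole signal, as in the text, removes the common-mode phase drift of the interferometer from each k-space measurement.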
The speed of our setup was mainly limited by the equipment available to us. We take data at a rate of about 1 k-space point per second, so 2D image scans can take hours or days depending on the pixel count (the 2D images presented in this paper were taken in about a day). However, with better equipment and engineering, the acquisition time could be reduced to just the duration of the raster scan of the mirror setup. For example, commonly available rotation stages with sufficient precision for 10^3 pixels per row could scan at least one k-space row per second, or about 15 minutes for a 1 megapixel image. Galvos or spinning mirrors could increase the speed further, but would also increase the engineering complexity.
Other ways to speed up image acquisition in our method (and other IPSII techniques) include various techniques developed for magnetic resonance imaging (MRI), such as partial Fourier reconstruction [47], parallel imaging [48], and compressive sensing [49]. Parallel imaging with IPSII has been demonstrated using conventional imaging optics [1]. Parallel imaging could be improved using the algorithms developed for MRI, such as SENSE [50] or GRAPPA [51]. These schemes only require the detectors to have slowly varying and different spatial response functions. These auto-calibrating algorithms could be implemented to perform parallel imaging without a lens, or with a low quality or poorly focused lens, as aberrations and focal blur mainly affect the individual detector responses (i.e. the object area contributing to the signal for each sensor), which are 'stitched' together by the auto-calibration. The speedup would be proportional to the number of sensors (e.g. the number of pixels in a sensor array).

Figure 4 shows the 2D image reconstructions resulting from imaging a USAF 1951 test target with varying FOV and resolution. These data were taken with the setup shown in Fig. 2(b), with a working distance (i.e. between the last beam splitter and the object) of about 100 mm. The measured resolutions agree with theoretical expectations for the maximum angle used. We tested resolutions down to about 2 µm, corresponding to an effective NA of 0.12. As the resolution appears limited only by the range of our motorized mounts and the mirror size, higher resolution should be possible with a setup that allows for larger angles.

Results
The combination of effective NA and working distance is near the limit of commercially available ultra-long working distance microscope objectives. Pushing past that mark with a setup like the one we used would only require larger mirrors. Optically flat mirrors are commercially available with diameters much greater than those of commercially available lenses with an NA > 0.1.
The signal to noise ratio (SNR) in the images we took was limited by amplitude noise in our laser and technical noise in our digitization equipment (both constrained by our equipment budget), and could be readily improved with better equipment and greater attention to signal engineering and noise isolation. Another limitation was the small range of our piezo actuator, which limited our digital lock-in to averaging over only 5-10 phase oscillations. These limitations could be overcome by using other mechanical phase scanning methods [46]. Alternatively, use of an AOM could easily push ∆ω into the MHz or GHz range, where demodulation could be done with analog electronics, greatly improving the lock-in detection and enhancing the SNR. In this case imaging speed would be limited only by the scan speed of the beam angle.

Conclusion
We have presented a method for lensless, single pixel, interference pattern structured illumination imaging using a mechanical angle scan. We derived a signal equation for our technique (also applicable to most two-beam IPSII techniques) that includes effects from distortions in the wavefronts of the illumination. Our derivation describes how IPSII effectively measures the Fourier transform of the product of an object function and two beam mode functions. We also discussed how this could be used to measure an object, to measure the mode of a laser, or to holographically measure a 3D complex object. We demonstrated the technique by imaging 1D profiles of a laser beam, the shadow of a 1D test target, and a 2D resolution test target.
Our technique differs from related IPSII techniques in that it only requires simple flat optics (beam-splitters and mirrors). It does not require a lens, acousto-optics, or custom engineered diffraction gratings. We instead generate the variable angles needed for IPSII using a mechanical angle scan. This severely limits the speed of the process, making it an inferior candidate for many optical imaging applications. However, the lack of a lens or other complicated optics could make it useful in a variety of cases and make it an appealing candidate for imaging with deep UV, X-rays, or other waves for which focusing elements are unavailable or impractical. Because it removes many of the technical complications that exist in related IPSII techniques, it may also be used to more easily isolate and study issues relevant to all IPSII imaging, such as wavefront distortions, shadows, positioning errors in k-space, etc. It is relatively easy and inexpensive to implement compared to other IPSII techniques, and could be useful for low-cost, high-resolution imaging applications where speed is less critical.

Funding
Brigham Young University's College of Physical and Mathematical Sciences; The National Defense Education SMART Fellowship program.