Multi-transmitter aperture synthesis with Zernike-based aberration correction

Multi-transmitter aperture synthesis increases the effective aperture in coherent imaging by shifting the backscattered speckle field across a physical aperture or set of apertures. Through proper arrangement of the transmitter locations, it is possible to obtain speckle fields with overlapping regions, which allows fast computation of optical aberrations from wavefront differences. In this paper, we present a method in which Zernike polynomials are used to model the aberrations, and high-order aberrations are estimated without the need for phase unwrapping of the difference fronts.

© 2012 Optical Society of America

OCIS codes: (090.1995) Digital holography; (100.3010) Image reconstruction techniques.

Received 6 Aug 2012; revised 21 Oct 2012; accepted 2 Nov 2012; published 8 Nov 2012. 19 November 2012 / Vol. 20, No. 24 / OPTICS EXPRESS 26448.

References and links
1. N. J. Miller, M. P. Dierking, and B. D. Duncan, "Optical sparse aperture imaging," Appl. Opt. 46(23), 5933–5943 (2007).
2. D. Rabb, D. Jameson, A. Stokes, and J. Stafford, "Distributed aperture synthesis," Opt. Express 18(10), 10334–10342 (2010).
3. B. K. Gunturk, N. J. Miller, and E. A. Watson, "Camera phasing in multi-aperture coherent imaging," Opt. Express 20(11), 11796–11805 (2012).
4. D. J. Rabb, D. F. Jameson, J. W. Stafford, and A. J. Stokes, "Multi-transmitter aperture synthesis," Opt. Express 18(24), 24937–24945 (2010).
5. R. A. Muller and A. Buffington, "Real-time correction of atmospherically degraded telescope images through image sharpening," J. Opt. Soc. Am. 64(9), 1200–1210 (1974).
6. R. G. Paxman and J. C. Marron, "Aberration correction of speckled imagery with an image sharpness criterion," in Statistical Optics, Proc. SPIE 976, 37–47 (1988).
7. J. R. Fienup and J. J. Miller, "Aberration correction by maximizing generalized sharpness metrics," J. Opt. Soc. Am. A 20(4), 609–620 (2003).
8. S. T. Thurman and J. R. Fienup, "Phase-error correction in digital holography," J. Opt. Soc. Am. A 25(4), 983–994 (2008).
9. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66(3), 207–211 (1976).
10. D. Rabb, J. W. Stafford, and D. F. Jameson, "Non-iterative aberration correction of a multiple transmitter system," Opt. Express 19(25), 25048–25056 (2011).
11. M. P. Rimmer and J. C. Wyant, "Evaluation of large aberrations using a lateral-shear interferometer having variable shear," Appl. Opt. 14(1), 142–150 (1975).
12. G. Harbers, P. J. Kunst, and G. W. R. Leibbrandt, "Analysis of lateral shearing interferograms by use of Zernike polynomials," Appl. Opt. 35(31), 6162–6172 (1996).
13. S. Okuda, T. Nomura, K. Kamiya, and H. Miyashiro, "High-precision analysis of a lateral shearing interferogram by use of the integration method and polynomials," Appl. Opt. 39(28), 5179–5186 (2000).
14. F. Dai, F. Tang, X. Wang, P. Feng, and O. Sasaki, "Use of numerical orthogonal transformation for the Zernike analysis of lateral shearing interferograms," Opt. Express 20(2), 1530–1544 (2012).
15. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, 2004).
16. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation (SPIE, 2010).
17. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2007).
18. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company, 2010).


Introduction
Coherent imaging is a growing area of research in which objects are imaged using laser light. The reflected wavefront is captured by an imaging system some distance away. Digital holography is used to capture the optical phase of the light incident on the receiver. For typical diffuse objects, the reflected light creates fully developed speckle at the receiver, which limits imaging system performance and complicates many approaches currently used to determine atmospheric aberrations.
Coherent aperture synthesis is a technique that enables high-resolution imagery by synthesizing a large aperture through the combination of image data from multiple smaller sub-apertures [1,2]. Each sub-aperture captures the pupil field using some holographic imaging technique; the measured sub-aperture fields are then placed in a common pupil plane corresponding to the physical locations of the sub-apertures, and the composite pupil-plane field is Fourier transformed, thus forming a digital image. Using a sharpness measure (applied to the formed image), the inter-aperture aberrations (including piston, tip, tilt, rotation, and shift) and intra-aperture aberrations (such as defocus, astigmatism, and coma) can be corrected [3].
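The synthesis step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the grid size, field shapes, and offset convention are assumptions for illustration.

```python
import numpy as np

def synthesize_image(subfields, offsets, grid=256):
    """Place measured sub-aperture pupil fields into a common pupil plane
    at their physical (row, col) offsets, then Fourier transform the
    composite field to form the digital image intensity."""
    pupil = np.zeros((grid, grid), dtype=complex)
    for field, (r0, c0) in zip(subfields, offsets):
        h, w = field.shape
        pupil[r0:r0 + h, c0:c0 + w] += field   # composite pupil-plane field
    # The image is the Fourier transform of the composite pupil field.
    img = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(img) ** 2
```

A larger synthesized aperture (wider pupil support) yields a narrower point-spread function and hence finer image resolution.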
The multi-transmitter aperture synthesis idea was recently proposed in [4]. By using multiple transmitters at different locations, and turning on one transmitter at a time, an aperture captures shifted pupil fields. This effectively turns a single physical aperture into a multi-aperture imaging system, where the coherent aperture synthesis technique can be used to obtain high-resolution imagery. The technique requires that target motion orthogonal to the line of sight over the course of the measurements is either known or small with respect to the aperture's resolution; and while the system is also sensitive to piston motion on the order of the wavelength, the resulting phase error can readily be found and corrected thanks to the overlapping pupil data. When the apertures are sparsely distributed, the aberrations are estimated by defining and optimizing a sharpness measure on the object image [5-8]. The aberrations are typically modeled with Zernike polynomials [9], and the problem is defined as calculating the optimal weights of the Zernike polynomials to maximize the sharpness measure. While this approach has been demonstrated to be effective in multi-transmitter aperture synthesis [4], its downside is the computational complexity of the required optimization process.
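A generalized sharpness measure of the kind maximized in [5-8] can be written compactly; the sketch below assumes the common power-law form, with the exponent β = 2 recovering the classic Muller-Buffington sharpness. The normalization choice is illustrative.

```python
import numpy as np

def sharpness(intensity, beta=2.0):
    """Generalized image-sharpness metric S = sum(I^beta) on a
    normalized intensity image; beta = 2 gives the classic metric.
    Sharper (more concentrated) images score higher."""
    i = intensity / intensity.sum()   # normalize total energy to 1
    return np.sum(i ** beta)
```

In the sparse-aperture approach, an optimizer adjusts the Zernike weights applied to the pupil phase until this metric is maximized, which is exactly the computational burden the present paper seeks to avoid.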
Instead of forming a set of sparsely distributed apertures, the transmitter locations can be placed close enough that the pupil fields captured by the shifted apertures overlap [10]. The overlapped aperture data is used to estimate the aberrations common to all of the aperture realizations, based on techniques similar to lateral shearing interferometry. Essentially, the wavefront difference in the overlap region is derived in terms of aberration parameters, which are then estimated from the measured data. The technique is based on the assumption that the static and atmospheric aberrations present in the aperture, aside from piston, are constant across the measurements; this requires that all measurements are recorded within the atmospheric coherence time in order to reconstruct the dynamic atmospheric aberrations. In [10], the idea is demonstrated with low-order aberrations by modeling the aberrations with bivariate polynomials. Instead of bivariate polynomials, Zernike polynomials could be used to model optical aberrations. While Zernike polynomials form a complete orthogonal basis on a circular pupil, their shifted differences are not necessarily linearly independent. In the lateral shearing interferometry literature, the difference front has been modeled in various ways, including Zernike polynomials [11], Zernike polynomials with the largest possible elliptical domain [12] or the largest possible circular domain [13] in the overlap region, and numerical orthogonal polynomials [14].
In this paper, we present a method to estimate aberrations for multi-transmitter aperture synthesis. The aberrations are modeled with Zernike polynomials, and the difference front is used directly, without any polynomial fitting, unlike the method in [10]. We segment the difference front into sub-regions; each sub-region has an unknown phase offset, which is added to the set of linear equations to be solved. The Zernike coefficients that model the aberrations are estimated along with the phase offsets, without the need for 2D phase unwrapping of the difference front. The estimated Zernike coefficients are then used to compensate for the modeled aberrations.

Proposed method
As illustrated in Fig. 1 and described in [4,10], the modeled multi-transmitter system illuminates a scene with one transmitter at a time and independently captures the backscattered fields in the pupil plane. Because of the shift in the transmitter locations, the backscattered target field U_b(x, y) is shifted by the same amount (in the reverse direction). On the other hand, the phase error is static because the sensor is fixed. Suppose that we measure two pupil-plane fields, U_1(x, y) and U_2(x, y), each with a different transmitter location:

U_i(x, y) = P(x, y) U_b(x + x_i, y + y_i) exp[j W_e(x, y)], i = 1, 2,   (1)

where P(x, y) is the pupil function, W_e(x, y) is the wavefront error, and (x_1, y_1) and (x_2, y_2) are the shift amounts in the backscattered fields due to the transmitter locations. Let W_b(x, y) be the wavefront of U_b(x, y); then the detected wavefronts are

W_i(x, y) = W_b(x + x_i, y + y_i) + W_e(x, y), i = 1, 2.   (2)

The object wavefront is not well defined and may contain several branch points, so it is desirable to calculate the aberration wavefront independent of the target wavefront. It is possible to numerically remove the dependence on the backscattered wavefront by first registering the wavefronts W_i(x, y) according to the shift amounts. The registration can be achieved based on calibrated measurement of the shifts due to the transmitter locations, as well as by registering the pupil-plane speckle intensity returning from the target [10]. By taking the difference between the registered wavefronts in the overlapping area (accomplished by multiplying one complex field by the complex conjugate of the other), the target term cancels and the wavefront difference (the difference front) becomes

ΔW(x, y) = W_e(x − x_1, y − y_1) − W_e(x − x_2, y − y_2).   (3)

In other words, after registration, the difference between the phase maps of the measured fields is essentially due to the wavefront error. The goal is to estimate the wavefront error and compensate for it. Note that ΔW(x, y) may have phase wraps, although this is not explicitly written in Eq. (3); we will explain how to handle the phase wrapping issue shortly.
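The conjugate-multiply registration step can be sketched numerically as follows. This is a simplified illustration assuming integer-pixel shifts and circular boundary handling (`np.roll`); the paper's registration is based on calibrated shifts or speckle-intensity registration.

```python
import numpy as np

def difference_front(U1, U2, shift):
    """Register U2 onto U1's grid by an integer-pixel shift, then
    multiply U1 by the conjugate of the registered U2. The common
    target wavefront W_b cancels, leaving the wrapped difference of
    the wavefront error, i.e. a wrapped version of Eq. (3)."""
    dr, dc = shift
    U2_reg = np.roll(U2, (dr, dc), axis=(0, 1))  # integer-pixel registration
    return np.angle(U1 * np.conj(U2_reg))        # wrapped difference front
```

Because `np.angle` returns values in (−π, π], the result is inherently wrapped, which is exactly why the sub-region offsets of the next section are needed.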
We model optical aberrations with Zernike polynomials. Let W_e(x, y) = Σ_k a_k Z_k(x, y), where Z_k(x, y) is a Zernike polynomial and a_k is the unknown coefficient corresponding to Z_k(x, y). Then, the wavefront difference becomes

ΔW(x, y) = Σ_k a_k ΔZ_k(x, y),   (4)

where we defined ΔZ_k(x, y) = Z_k(x − x_1, y − y_1) − Z_k(x − x_2, y − y_2). At this point, one may propose to solve for a_k by forming a linear set of equations for all positions (x, y). There are, however, two important issues. First, the difference fronts ΔZ_k(x, y) do not form an orthogonal basis, resulting in a set of equations that are not necessarily linearly independent. The problem could be alleviated by using multiple measurements corresponding to transmitter separations along different directions, giving additional overlapping regions; this could result in an over- or well-determined system, which is necessary for the proposed approach to successfully calculate the desired aberration coefficients. Second, the difference front ΔW(x, y) needs to be phase unwrapped. Looking at Fig. 2(a), we see that there could be phase jumps of 2π from one region to another. That is, Eq. (4) is not valid for all pixels in the overlap region. On the other hand, the overlap region could be divided into sub-regions in which there is no phase wrap, as illustrated in Fig. 2(b); and within each sub-region, we could write Eq. (4) with a constant but unknown phase offset. In sub-region m, the wavefront difference at the ith position in that sub-region is

ΔW(x_i^(m), y_i^(m)) = α^(m) + Σ_k a_k ΔZ_k(x_i^(m), y_i^(m)),   (5)

where α^(m) is the unknown phase offset of sub-region m, ΔZ_k(x_i^(m), y_i^(m)) is the difference between the shifted Zernike polynomials, and a_k is the coefficient of the kth Zernike polynomial. Let N_m be the number of pixels in sub-region m and M be the number of sub-regions; then we stack Eq. (5) over all samples i = 1, ..., N_m and sub-regions m = 1, ..., M into the linear set of equations

d = [B | C] [α^(1), ..., α^(M), a_1, ..., a_K]^T,   (6)

where d collects all ΔW samples, B is the (Σ_m N_m) × M indicator matrix whose entry is 1 if the sample lies in the corresponding sub-region and 0 otherwise, and C is the (Σ_m N_m) × K matrix of the values ΔZ_1, ..., ΔZ_K at each sample position. The unknown parameters α^(1), ..., α^(M), a_1, ..., a_K can be estimated in a number of ways; in this paper they are estimated using the QR-decomposition-based pseudo-inverse operation (the "backslash" operator in MATLAB). The Zernike coefficients a_1, ..., a_K are then used to correct for the aberrations in each aperture.
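The assembly and solution of the system in Eq. (6) can be sketched in a few lines of NumPy; here `np.linalg.lstsq` stands in for MATLAB's backslash operator, and the input arrays (flattened difference-front samples, sub-region labels, difference-Zernike values) are assumed to have been precomputed.

```python
import numpy as np

def solve_offsets_and_coeffs(dW_vals, region_ids, dZ_vals):
    """Build and solve the stacked system of Eq. (6).
    dW_vals:    (P,) difference-front samples in wrap-free sub-regions
    region_ids: (P,) sub-region index of each sample
    dZ_vals:    (P, K) difference-Zernike values at each sample
    Returns the phase offsets alpha^(m) and Zernike coefficients a_k."""
    P, K = dZ_vals.shape
    regions = np.unique(region_ids)
    A = np.zeros((P, len(regions) + K))
    for col, m in enumerate(regions):
        A[region_ids == m, col] = 1.0          # offset indicator columns (B)
    A[:, len(regions):] = dZ_vals              # difference-Zernike columns (C)
    x, *_ = np.linalg.lstsq(A, dW_vals, rcond=None)  # least-squares solution
    return x[:len(regions)], x[len(regions):]  # (alphas, Zernike coefficients)
```

With noise-free samples the least-squares solution recovers the true offsets and coefficients exactly, provided the stacked system is well- or over-determined.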
Note that the matrix equation (6) is not limited to two transmit realizations of the aperture. If there are more than two transmitters, the sub-regions from every pair of overlapping apertures are included in this matrix equation.

Experimental results
We present several experiments to demonstrate the method. A three-transmitter imaging system, shown in Fig. 3(a), is simulated. The simulation models an optically diffuse object at range L, flood illuminated by a coherent laser source of wavelength λ. The complex-valued field reflected off the object is modeled with amplitude equal to the square root of the object's intensity reflectance and phase a uniformly distributed random variable over −π to π. The complex-valued field in the receiver plane, subject to the paraxial approximation, is given by the Fresnel diffraction integral [15]. The Fresnel diffraction integral is numerically evaluated using the angular spectrum propagation method [16]. The object and receiver pupil planes in the simulation consist of N = 2048 × 2048 computational grids with identical 182 μm sample spacings in both planes. The optical wavelength λ is 1.55 μm, and the range L from the receive pupil plane to the object is 100 meters. The numerical propagation consists of 10 partial propagations of 10 meters each to avoid wraparound effects. The optical field in the receiver pupil plane is collected using a 48 mm diameter aperture, with three transmitters in the configuration shown in Fig. 3(b). The focused optical fields are then aberrated using randomly weighted Zernike polynomials.
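The split propagation used in the simulation can be sketched as below. This is a bare-bones angular-spectrum (Fresnel transfer function) propagator in the spirit of [16], omitting the global piston phase and any absorbing boundary; grid parameters are taken from the text, but the function itself is an illustrative assumption, not the authors' simulation code.

```python
import numpy as np

def angular_spectrum_prop(U, wavelength, dx, z, steps=10):
    """Propagate field U a distance z via the angular-spectrum method,
    split into `steps` partial propagations (e.g. 10 x 10 m for the
    100 m range in the text) to limit wraparound artifacts.
    The constant exp(jkz) piston phase is omitted."""
    n = U.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function for one partial step of length z/steps.
    H = np.exp(-1j * np.pi * wavelength * (z / steps) * (FX**2 + FY**2))
    for _ in range(steps):
        U = np.fft.ifft2(np.fft.fft2(U) * H)   # one partial propagation
    return U
```

Since |H| = 1, the propagation is energy conserving, which is a convenient sanity check on an implementation.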
To estimate the aberrations, we register the pupil fields according to the transmitter locations and take the difference between the wavefronts in the overlapping regions, as shown in Fig. 4(a) to 4(c). There are three overlap regions. Within each overlap region, we determine the sub-regions through segmentation. Our segmentation procedure is as follows. We first detect the edges using the Canny edge detector [17]. The edge lines are morphologically dilated (with a 3 × 3 kernel) to obtain a wider coverage of the discontinuities. The region segments outside the edges form the sub-regions. The extracted sub-regions are shown in Fig. 4(d) to 4(f). As seen in these sample results, the segmentation procedure may result in over-segmentation; however, this is not an issue. The opposite (under-segmentation), on the other hand, would be an issue, as multiple regions with different phase offsets would be forced to have the same phase offset, which would degrade the estimation of the Zernike coefficients. With over-segmentation, the only concern is the introduction of additional phase offset parameters to be estimated, and therefore an increase in the computational cost. One possible approach to reduce the computational cost is to discard small sub-regions, which should not affect the performance as the majority of pixels are still included. In our experiments we discard sub-regions smaller than 20 pixels, and place the remaining sub-regions into the matrix equation (6).
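The segmentation step can be sketched as follows. The paper uses a Canny detector [17]; in this self-contained sketch a simple phase-jump threshold on neighboring pixels stands in for Canny, while the 3 × 3 dilation and the discarding of sub-regions smaller than 20 pixels follow the text.

```python
import numpy as np
from scipy import ndimage

def segment_subregions(dW, min_pixels=20):
    """Split a wrapped difference front into wrap-free sub-regions.
    Pixels whose neighbor-to-neighbor phase jump exceeds pi are marked
    as edges (a stand-in for the Canny detector of the paper), the
    edges are dilated with a 3x3 kernel, the remaining regions are
    labeled, and sub-regions smaller than min_pixels are discarded."""
    edges = np.zeros_like(dW, dtype=bool)
    edges[:-1, :] |= np.abs(np.diff(dW, axis=0)) > np.pi  # vertical jumps
    edges[:, :-1] |= np.abs(np.diff(dW, axis=1)) > np.pi  # horizontal jumps
    edges = ndimage.binary_dilation(edges, np.ones((3, 3), bool))
    labels, n = ndimage.label(~edges)          # connected wrap-free regions
    for m in range(1, n + 1):
        if np.sum(labels == m) < min_pixels:
            labels[labels == m] = 0            # discard tiny sub-regions
    return labels                              # 0 marks edges / discarded
```

Each nonzero label then contributes one offset column to the matrix equation (6), while label-0 pixels are simply excluded.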
In conventional shearing interferometry, the difference front is interpreted as the slope or gradient of the aberration at a given location along the direction of the shear. Here we instead use a matrix-based approach that gives a system of equations relating the observed difference fronts to those that would be expected from a particular aberration. Through a pseudo-inverse operation, we obtain the least-squares solution minimizing the difference between the observed and calculated difference fronts. This linear-algebraic approach, which requires no unwrapping, allows the aberration to be estimated efficiently even with a complex target scene present.
An image formed using one of the three transmit locations is shown in Fig. 5(a). The phase aberration obtained by solving Eq. (6) is shown in Fig. 5(b). The aberration is found for a single speckle realization; the aberration correction is then applied to 30 different speckle realizations, which are finally averaged, with the result shown in Fig. 5(c). We can synthesize a larger aperture by placing all three pupil fields in a common plane; the resulting composite image, averaged over 30 realizations, is shown in Fig. 5(d) and has higher spatial resolution than Fig. 5(c).
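The correction-and-averaging step can be sketched as follows; this is an illustrative NumPy sketch (function and variable names are assumptions), applying the conjugate of the estimated aberration phase to each pupil field and averaging the resulting image intensities over speckle realizations, as done for Fig. 5(c).

```python
import numpy as np

def correct_and_average(pupil_fields, W_est):
    """Apply the estimated aberration correction exp(-j W_est) to each
    measured pupil-plane field, form the image of each speckle
    realization, and average the intensities to suppress speckle."""
    correction = np.exp(-1j * W_est)           # conjugate of estimated phase
    imgs = [np.abs(np.fft.fftshift(np.fft.fft2(U * correction))) ** 2
            for U in pupil_fields]
    return np.mean(imgs, axis=0)               # speckle-averaged image
```

Averaging the intensities (not the complex fields) is what smooths the fully developed speckle while preserving the aberration-corrected resolution.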
We also evaluate the performance of the algorithm in the presence of noise. In addition to the phase aberrations, we corrupt the pupil field data with additive white Gaussian noise [18]. Here the SNR is defined as the average ratio of the signal intensity to the noise intensity over all pixels in the coherent receiver; this is equivalent to the number of target photo-electrons collected at each pixel for a shot-noise-limited system. We calculate the root mean square (RMS) wavefront reconstruction error for different noise amounts and wavefront aberrations. In each experiment, a wavefront aberration is simulated by randomly choosing the coefficients of the Zernike polynomials from a specific range, and its RMS value is recorded; in addition, random noise with a specific signal-to-noise ratio is created for each realization. The actual wavefront is degraded with the simulated wavefront aberration and noise; the proposed algorithm is then used to perform the restoration, and the RMS wavefront reconstruction error is calculated. We repeat the experiment 100 times for each signal-to-noise ratio, Zernike aberration order, and coefficient range. The overall results are shown in Fig. 6.
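The noise model and error metric of this Monte Carlo study can be sketched as below; a minimal illustration assuming circular complex Gaussian noise scaled to the intensity-ratio SNR definition given in the text (function names are illustrative).

```python
import numpy as np

def add_noise(U, snr, rng):
    """Corrupt a pupil field with circular complex white Gaussian noise
    so that SNR = mean signal intensity / mean noise intensity, matching
    the definition used in the experiments."""
    noise_intensity = np.mean(np.abs(U) ** 2) / snr   # per-pixel noise power
    noise = np.sqrt(noise_intensity / 2) * (
        rng.standard_normal(U.shape) + 1j * rng.standard_normal(U.shape))
    return U + noise

def rms_wavefront_error(W_true, W_est):
    """RMS wavefront reconstruction error; np.std removes the mean,
    i.e. the unobservable global piston term."""
    return np.std(W_true - W_est)
```

Repeating such a trial 100 times per SNR and aberration range, as described above, yields curves like those of Figs. 6 and 7.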

Fig. 1. Multi-transmitter system. The scene is illuminated with one transmitter at a time. A shift of Δx in the transmitter location results in a shift of −Δx in the backscattered field.

Fig. 4. (a) to (c) show the phase difference in the overlap regions. (d) to (f) show the sub-regions within each overlap region.

Fig. 5. (a) Image formed at an aperture by averaging 30 speckle realizations. (b) Estimated phase error using the proposed algorithm. (Units are in waves; up to fifth-order Zernike polynomials are used.) (c) Aberration-corrected first aperture, corresponding to (a). (d) Composite formed by all three aberration-corrected apertures.

Fig. 6. Root mean square error in the wavefront reconstruction as a function of root mean square wavefront aberration.

Fig. 7. Root mean square error in the wavefront reconstruction as a function of signal to noise ratio in the pupil field.

Fig. 8. A sample restoration, where the RMS wavefront aberration is 0.4951 and the signal-to-noise ratio is 100. (a)-(c) The difference fronts and the corresponding sub-regions. (d) Actual wavefront aberration. (e) Estimated wavefront aberration. (f) The difference between the actual and estimated wavefront aberrations. The RMS wavefront reconstruction error is 0.0403. Note that in (d)-(f), the same colormap is used for comparison purposes.