High-resolution speckle imaging through strong atmospheric turbulence

We demonstrate that high-resolution imaging through strong atmospheric turbulence can be achieved by acquiring data with a system that captures short-exposure ("speckle") images using a range of aperture sizes, and then using a bootstrap multi-frame blind deconvolution restoration process that starts with the smallest-aperture data. Our results suggest a potential paradigm shift in how we image through atmospheric turbulence. No longer should image acquisition and post-processing be treated as two independent processes: they should be considered as intimately related.

©2016 Optical Society of America

OCIS codes: (010.1330) Atmospheric turbulence; (100.1455) Blind deconvolution; (100.3020) Image reconstruction-restoration; (100.5070) Phase retrieval; (100.6640) Superresolution; (110.4155) Multiframe image processing.

#260975 Received 13 Mar 2016; revised 6 May 2016; accepted 7 May 2016; published 25 May 2016. © 2016 OSA. 30 May 2016 | Vol. 24, No. 11 | DOI:10.1364/OE.24.012116 | OPTICS EXPRESS 12116

References and links

1. S. M. Jefferies and J. C. Christou, "Restoration of astronomical images by iterative blind deconvolution," Astrophys. J. 415, 862–864 (1993).
2. T. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10(5), 1064–1073 (1993).
3. D. Fried, "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures," J. Opt. Soc. Am. 56(10), 1372–1379 (1966).
4. B. Calef, "Improving imaging through turbulence via aperture partitioning," Proc. SPIE 7701, 77010G (2010).
5. J. P. Bos and B. Calef, "Using aperture partitioning to improve scene recovery in horizontal long-path speckle imaging," Proc. SPIE 9614, 961407 (2015).
6. B. Calef, "Weak signals and strong turbulence," presentation to the AFOSR PROTEA group, September 10, 2011.
7. J. M. Ivanov and D. McGaughey, "Image reconstruction by aperture diversity blind deconvolution," in Proceedings of the AMOS Conference (2007).
8. E. Thiebaut and J.-M. Conan, "Strict a priori constraints for maximum-likelihood blind deconvolution," J. Opt. Soc. Am. A 12(3), 485–492 (1995).
9. N. Miura, "Blind deconvolution under band limitation," Opt. Lett. 28(23), 2312–2314 (2003).
10. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, "Sensing wave-front amplitude and phase with phase diversity," Appl. Opt. 41(11), 2095–2102 (2002).
11. A. Foi, M. Trimeche, V. Katkovnik, and K. Egiazarian, "Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data," IEEE Trans. Image Process. 17(10), 1737–1754 (2008).
12. W.-Y. V. Leung and R. G. Lane, "Blind deconvolution of images blurred by atmospheric speckle," Proc. SPIE 4123, 73–83 (2000).
13. R. Vio, J. Bardsley, and W. Wamsteker, "Least-squares methods with Poissonian noise: analysis and comparison with the Richardson-Lucy algorithm," Astron. Astrophys. 436(2), 741–755 (2005).
14. M. R. Stoneking and D. J. Den Hartog, "Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties," Rev. Sci. Instrum. 68(1), 914–917 (1997).
15. L. M. Mugnier, T. Fusco, and J.-M. Conan, "MISTRAL: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images," J. Opt. Soc. Am. A 21(10), 1841–1854 (2004).
16. C. L. Matson, K. Borelli, S. Jefferies, C. C. Beckner, Jr., E. K. Hege, and M. Lloyd-Hart, "Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects," Appl. Opt. 48(1), A75–A92 (2009).
17. D. A. Hope and S. M. Jefferies, "Compact multiframe blind deconvolution," Opt. Lett. 36(6), 867–869 (2011).
18. S. Barraza-Felix and B. R. Frieden, "Regularization of the image division approach to blind deconvolution," Appl. Opt. 38(11), 2232–2239 (1999).
19. S. M. Jefferies and M. Hart, "Deconvolution from wave front sensing using the frozen flow hypothesis," Opt. Express 19(3), 1975–1984 (2011).
20. T. Rimmele, "Haleakala turbulence and wind profiles used for adaptive optics performance modeling," ATST Project Document RPT-0030 (2006).
21. J. M. Conan, L. M. Mugnier, T. Fusco, V. Michau, and G. Rousset, "Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra," Appl. Opt. 37(21), 4614–4622 (1998).
22. M. Aubailly and M. A. Vorontsov, "Scintillation resistant wavefront sensing based on multi-aperture phase reconstruction technique," J. Opt. Soc. Am. A 29(8), 1707–1716 (2012).
23. A. Polo, N. van Marrewijk, S. F. Pereira, and H. P. Urbach, "Sub-aperture phase reconstruction from a Hartmann wave front sensor by phase retrieval method for application in EUV adaptive optics," Proc. SPIE 8322, 832219 (2012).


Introduction
Speckle imaging is a well-known technique for obtaining high-resolution images of astronomical targets through the Earth's turbulent atmosphere. The technique requires acquiring very short exposures of the target and then using post-detection numerical processing of the imagery to remove the blur caused by turbulence-induced aberrations in the optical wave fronts. The level of blur removal depends on a number of factors including the seeing conditions, the wavelength of the observations, the size of the telescope aperture and, if adaptive optics (AO) are used, the efficiency of the AO system (which is never perfect: there is always some residual wave-front error). The effectiveness of the removal depends on the level of knowledge available of the system point spread function (PSF). As this knowledge is usually sparse, a popular choice for the numerical restoration technique is multi-frame blind deconvolution (MFBD [1,2]). However, the range of observing conditions over which MFBD performs well is limited to benign to moderate levels of turbulence, which is typically characterized by the ratio D/r0, where D is the telescope diameter and r0 is the Fried parameter that describes the spatial coherence length of the atmospheric wave front [3]. Imagery obtained through strong turbulence (D/r0 > 40) is usually considered of little value and is typically discarded.
The goal of the work reported here is to leverage two observations on the performance of MFBD algorithms in restoring imagery obtained through atmospheric turbulence, to provide a data acquisition/restoration strategy that will allow us to expand our capability for high-resolution imaging when the turbulence is strong. The first observation concerns the initial estimates for the object and the PSFs in each image that are required by all MFBD algorithms: performance is sensitive to the quality of these estimates. The second observation is that the performance of MFBD deteriorates as the strength of the atmospheric turbulence increases and the image progressively degrades.
Taken together, these observations suggest that when viewing through turbulence that is too strong to allow a successful restoration of the imagery acquired through the full aperture of the optical system, we should consider acquiring data simultaneously through apertures spanning a range of sizes. In this way we can begin by producing a restoration of good quality but low resolution from data obtained with the smallest aperture, where the phase distortions characterized by D/r0 are restricted. We can use that restoration of the target object to seed the restoration of the data from the next larger aperture with more severe aberration. We then continue to apply this bootstrap approach through to the largest aperture, whose data contain the highest spatial frequency information on the target.
One practical way to acquire imagery of a target with a range of D/r 0 values is to partition a single telescope aperture into annuli and form separate images through each annulus. Another is to use multiple telescopes at the same site with different size apertures. Here we will focus on the former.
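As an illustration of the partitioning geometry, the concentric annular pupil masks can be generated with a few lines of NumPy. This is a sketch: the grid size and the use of a hard central obscuration are illustrative choices, not a specification of the optical hardware.

```python
import numpy as np

def annular_masks(n, diameters_m, scale_m_per_px, obscuration_m=0.4):
    """Binary pupil masks for concentric annular partitions.

    diameters_m: outer diameters of the annuli, smallest first
    (e.g. [1.0, 1.6, 3.6]); each annulus spans from the previous
    outer diameter (or the central obscuration) to its own.
    """
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y) * scale_m_per_px          # radius in metres
    masks, inner = [], obscuration_m / 2
    for d in diameters_m:
        outer = d / 2
        masks.append(((r >= inner) & (r < outer)).astype(float))
        inner = outer
    return masks

# 1.5 cm per pixel, as in the simulations described later
masks = annular_masks(256, [1.0, 1.6, 3.6], 1.5e-2)
```

Because the annuli tile the pupil, the masks are mutually disjoint and their sum is the full (obscured) aperture.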
Partitioning the aperture into concentric annuli has previously been proposed as a way to improve image quality through atmospheric turbulence [4,5]. Calef showed that aperture partitioning significantly outperforms conventional full-aperture imaging in the processing of a small number of image frames with the bi-spectrum method: the improvement in quality is enabled by a reduction in baseline redundancy noise in the partitioned data set [4]. However, the difference between full- and partitioned-aperture imaging was found to evaporate quickly as the number of frames simultaneously processed increased. In addition, Calef found only a minimal benefit to partitioning the aperture when the data were processed with MFBD [6]. Similarly small gains over full-aperture imaging were obtained when aperture partitioning and bi-spectrum restoration were applied to anisoplanatic data taken through mild turbulence [5].
The rest of this paper is organized as follows. In Section 2 we use numerical experiments to illustrate how the performance of MFBD algorithms depends on the initial object and on the strength of the atmospheric turbulence. In Section 3 we describe our approach to modifying MFBD for aperture-diverse data. Details of how we generate simulated data are given in Section 4, and these data are then used in an extensive set of numerical tests in Section 5. Concluding remarks are provided in Section 6.

MFBD performance sensitivities
The performance of MFBD algorithms is observed to depend on initial estimates for the object and the PSFs, and on the strength of the atmospheric turbulence. In this section we illustrate the importance of these two observations.
First consider the initial object and PSFs: the performance of MFBD is sensitive to the quality of these estimates, and typically the performance is improved if the initial estimate provided for the object has high-quality low-spatial-frequency content over the widest possible range of spatial frequencies. This then facilitates the extraction of the high spatial frequencies, as demonstrated in Fig. 1, which shows the results of two restorations obtained from the same simulated data under different initial conditions. The data modeled an object subtending 8 μrad at a telescope of D = 3.6 m, imaged at a wavelength of 600 nm. The atmospheric coherence length was taken to be r0 = 9 cm, making D/r0 = 40. Each restoration used a 17-frame data set.
The initial object estimate in the first case was set to the mean data frame; frequencies above the atmospheric seeing limit are severely attenuated. The second started from an image filtered to the diffraction limit of a 1 m aperture, where the low spatial frequencies are rendered much more accurately, but higher frequencies between the cut-offs imposed by a 1 m aperture and a 3.6 m aperture are lost.
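The second initialization can be sketched numerically. Here a hard circular cut-off in the Fourier plane stands in for the true transfer function of the smaller aperture, and we assume the full aperture corresponds to the edge of the sampling grid; both are simplifying assumptions for illustration.

```python
import numpy as np

def diffraction_limited_init(mean_frame, d_small, d_full):
    """Initial object estimate: the mean data frame low-pass filtered
    to the diffraction limit of a smaller aperture (sketch).

    A hard circular Fourier cut-off approximates the d_small-aperture
    transfer function; the cut-off radius scales with the ratio of the
    aperture diameters, assuming the full aperture maps to the edge
    of the grid.
    """
    n = mean_frame.shape[0]
    F = np.fft.fftshift(np.fft.fft2(mean_frame))
    y, x = np.indices((n, n)) - n // 2
    cutoff = (n // 2) * (d_small / d_full)
    F[np.hypot(x, y) > cutoff] = 0.0
    # magnitude keeps the estimate non-negative, as MFBD requires
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
```

For example, `diffraction_limited_init(mean_frame, 1.0, 3.6)` retains only frequencies below the cut-off of a 1 m aperture.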
Both restorations, working on the same data, therefore start with essentially no knowledge of the object's high spatial frequency content. But Fig. 1 shows that the restoration initialized with the better low spatial frequency estimates yields a result that also has improved estimates of the high frequencies. (We note the initial estimates for the PSF in each case were set to the diffraction-limited PSF for the 3.6 m aperture.) This sensitivity to the initial conditions is a consequence of the performance limitations of current optimization algorithms in dealing with problems with thousands of variables. However, even with a poor initial object guess, MFBD does improve the estimate of the object. This behavior suggests the use of an iterative restoration process, where the object estimate from a first pass through the MFBD algorithm is used as the initial guess for a second pass, and so on.

The second observation on the performance of MFBD is that it deteriorates as the strength of the atmospheric turbulence increases and the image progressively degrades. This is clearly demonstrated in Fig. 2, which shows the restoration of imagery obtained over a range of atmospheric turbulence conditions from benign to strong.

These observations motivate our proposal to use a bootstrap approach, where we begin by producing a restoration of good quality but low resolution from data obtained with the smallest aperture, for which the phase distortions characterized by D/r0 are restricted. We then use that restoration of the target object to seed the restoration of the data from the next larger aperture with more severe aberration, and continue in this way through to the largest aperture, whose data contain the highest spatial frequency information on the target. The next section describes in more detail how this can be efficiently implemented with aperture-diverse data.

Aperture diversity MFBD
We use an MFBD algorithm, modified for aperture-diverse data [7], to estimate both the blur-free object and the observed wave-front phases, which provide estimates of the PSFs for each image. The diversity is in the size of the apertures as opposed to their shape [7]. The reduced effect of atmospheric aberration in the smaller apertures is essential to our method. The details of our algorithm are as follows.
In each data channel (i.e., aperture size) we assume that we are dealing with a field of view smaller than the isoplanatic angle. In this case we can focus on isoplanatic, incoherent imagery and model the focal-plane data g_{l,k}(x), for channel l and frame k, as the convolution of the channel PSF with the object. Positivity of the object estimate is enforced by writing it as ô(x) = ψ̂²(x) [8]. Here the "hat" symbol denotes an estimated quantity (as opposed to a measured quantity). The real function ψ(x) is described using a 2-D pixel basis set. We note that there are other ways to enforce positivity on the object (e.g., [9]). For each imaging channel l the optical transfer function (OTF) at time k is modeled as

Ĥ_{l,k}(u) = [B_l(u) exp(iΦ̂_k(u))] ⊗ [B_l(u) exp(iΦ̂_k(u))], with Φ̂_k(u) = φ̂_k(u) ⊕ η(u). (1)

Here B_l(u) is a binary pupil support mask, Φ̂_k(u) is the estimate of the wave-front phase in the pupil, φ are the phase variables described with a 2-D pixel basis set, η is a Gaussian smoothing kernel, ⊗ denotes the correlation operator, and ⊕ denotes convolution. This model ensures that h(x) is a non-negative, band-limited function [2,9]. We note that use of a binary pupil support mask assumes that atmospheric scintillation is negligible.
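The OTF model can be sketched numerically: the autocorrelation of the generalized pupil function B_l exp(iΦ̂) is conveniently evaluated as the Fourier transform of the PSF. Function names here are ours, and the DFT conventions are a simplification.

```python
import numpy as np

def psf_and_otf(B, phi):
    """PSF and OTF for pupil mask B with wave-front phase phi (radians).

    The OTF is the autocorrelation of the generalized pupil function
    B * exp(i * phi), computed here as the Fourier transform of the PSF.
    By construction the PSF is non-negative and the OTF is band-limited
    to twice the pupil support.
    """
    pupil = B * np.exp(1j * phi)
    psf = np.abs(np.fft.fft2(pupil)) ** 2   # intensity PSF
    psf /= psf.sum()                        # unit volume, so OTF(0) = 1
    otf = np.fft.fft2(psf)                  # autocorrelation of the pupil
    return psf, otf
```

For an unaberrated circular pupil the OTF vanishes exactly at spatial frequencies beyond twice the pupil radius, which is the band-limiting property the model relies on.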
The values of ψ(x) and φ(x) are estimated by minimizing the weighted least-squares cost metric

E(ψ̂, φ̂) = Σ_{l,k} Σ_x w_{l,k}(x) [g_{l,k}(x) − ĝ_{l,k}(x)]². (2)
The data model for each channel is computed as the convolution of the PSF and the object estimate,

ĝ_{l,k}(x) = α_l ĥ_{l,k}(x) ⊕ ô(x), (3)

where the scalar α_l accounts for the different photon fluxes in each channel due to the different sizes of the apertures. The weighting function is given by

w_{l,k}(x) = [ĝ_{l,k}(x) + σ_n²]⁻¹, (4)

and comprises a signal-dependent Poisson variance term, given by the data model, and a Gaussian variance term, given by σ_n², for the remaining stationary noise in the data [10–12]. Using a least-squares approach instead of maximum likelihood is reasonable on two fronts: first, there is evidence that the quality of the restoration is relatively insensitive to the noise model [13]; and second, for data corrupted by Poisson noise, the solution provided by least-squares minimization only deviates from that of a maximum-likelihood approach when the signal level is less than 10 counts [14]. For the minimization we use the method of conjugate gradients, terminated when the error metric changes by a fractional amount less than a user-defined tolerance, which we typically set to 1 × 10⁻⁵.
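The data model and the weighted least-squares metric described above can be sketched as follows. This is a toy implementation: circular convolution via the FFT stands in for the full forward model, and the function names are ours.

```python
import numpy as np

def data_model(obj, psf, alpha):
    """Model frame: alpha * (PSF convolved with object), via the FFT.
    alpha scales for the photon flux collected by each aperture."""
    return alpha * np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

def wls_cost(frames, models, sigma_n2):
    """Weighted least-squares metric: each residual is weighted by the
    inverse of a Poisson variance term (the model itself) plus a
    Gaussian variance term sigma_n2 for the stationary noise."""
    cost = 0.0
    for g, ghat in zip(frames, models):
        w = 1.0 / (np.clip(ghat, 0.0, None) + sigma_n2)
        cost += np.sum(w * (g - ghat) ** 2)
    return cost
```

The cost is zero when the models reproduce the data exactly, and each pixel's contribution is down-weighted where the model predicts a large photon count.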
For single-channel data (i.e., just one partition), this defines our basic MFBD algorithm. We then perform the restorations in an iterative manner; that is, the object estimate returned by one pass of the algorithm, run to convergence, is used to seed a completely new pass. On the first pass we use the mean data frame and zero phases as our initial estimates for the object and pupil phases, respectively. We set w_{l,k}(x) = 1 (uniform weighting), since the data model is too poor to give a sufficiently accurate estimate of the photon noise. At the very beginning of the restoration process we only have the ensemble-averaged frame (or the frame with the least blurring) as our estimate of the object, and do not have any information on the wave-front phases. To improve on this situation we perform a "phase annealing" step.
Here we fix the object and perform a set of limited iterations (~5) of the phase updates for a range of sizes for the FWHM of the Gaussian phase-smoothing kernel η. We start with a large value, where we can fit 2-3 FWHM of the smoothing kernel across the pupil, and end with a value where we can fit ~10 FWHM across the pupil. This procedure has been found to reliably provide reasonable initial low spatial frequency estimates of the phases: these frequencies play a strong role in defining the gross morphology of the dominant speckle structure in the PSFs. After this initial phase anneal we return to joint estimation of the object and phase variables, significantly increase the number of iterations for the updates (~1000), and perform additional annealing down to a FWHM value of the phase-smoothing kernel that is commensurate with r0 (or smaller). After convergence of the phase annealing step we perform a "refresh" on the object estimate. That is, we reset the object variables to a constant value (whose sum equals the sum of the mean data frame) and perform an iterative forward deconvolution with the PSF estimates fixed at their values from the end of the previous step. We then select the object estimate at the iteration where the object visually has the best resolution without any evidence of high-frequency artifacts (i.e., we regularize the estimate through truncated iteration). Without this refresh procedure, any high-frequency artifacts that may have appeared in the object estimate during the restoration are difficult to remove and can inhibit improvement in successive passes.
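The annealing schedule described above, from 2-3 kernel FWHM across the pupil down to ~10, can be sketched as below. The number of schedule steps and the geometric spacing are our assumptions; the text specifies only the endpoints.

```python
import numpy as np

def gaussian_kernel(n, fwhm_px):
    """Normalized 2-D Gaussian smoothing kernel (the eta of the model)."""
    y, x = np.indices((n, n)) - n // 2
    sig = fwhm_px / 2.3548                 # FWHM -> standard deviation
    k = np.exp(-(x ** 2 + y ** 2) / (2 * sig ** 2))
    return k / k.sum()

def anneal_schedule(pupil_px, start_frac=2.5, end_frac=10.0, steps=5):
    """FWHM schedule for the phase-smoothing kernel: start with ~2.5
    FWHM across the pupil, end with ~10 (steps=5 is an assumed count)."""
    return np.geomspace(pupil_px / start_frac, pupil_px / end_frac, steps)

# 240 pixels across the pupil, as in the simulations described later
fwhms = anneal_schedule(240)
```

Smoothing the phase variables with progressively narrower kernels lets the low spatial frequencies of the phase lock in before fine structure is allowed to develop.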
On the second pass we use the refreshed object as the initial estimate, and we use Eq. (4) for w_{l,k}(x) with ĝ(x) in the denominator fixed at the values from the first pass, since by now the models for the image frames are a good representation of the data. This staged implementation of the weighted-Gaussian model has been demonstrated previously [12]. We then reset the initial phase estimates to zero, and update the phase annealing schedule to allow more iterations (~20) while the object is fixed at the beginning of the next pass of the algorithm. We note that another way of providing an estimate of the signal-dependent weights is to use the data in Eq. (4) instead of the model for the data [15,16]. Although this approach is simpler to implement, in our experience the staged weighted-Gaussian model is more robust. Figure 3 shows a schematic of our restoration process; some terms and abbreviations are described in later sections. The gain from using an iterative (multi-pass) approach to MFBD is highlighted in Fig. 4. As can be seen, the technique loses its impact somewhere above D/r0 = 40.
For aperture diverse data (with more than one channel) we use the basic MFBD algorithm modified for the following bootstrap procedure.
For the first pass we perform an MFBD restoration on the data from the smallest annulus by minimizing Eq. (2) with w_{l,k}(x) = 1. After convergence we refresh the object estimate as described above. We then introduce the data from the next channel and continue processing with two steps designed to yield good PSF estimates in the new channel. Firstly, the low frequencies of the PSFs are estimated by enforcing the observed spectral ratios (the ratios of the Fourier spectra of pairs of data frames [17,18]), by minimizing the consistency metric

E_SR = Σ_u w_{l,l′,k,k′}(u) |G_{l,k}(u) Ĥ_{l′,k′}(u) − G_{l′,k′}(u) Ĥ_{l,k}(u)|², (5)

where G_{l,k}(u) is the Fourier spectrum of frame k in channel l. This process recognizes that for a stationary object, in the absence of noise, the spectrum of the object cancels out in the ratio of the spectra of any two images, leaving only the ratio of the PSF spectra. As we are only using the spectral ratios to provide reasonable starting estimates for the wave-front phases on the next channel, we do not attempt a maximum-likelihood solution: instead we set w_{l,l′,k,k′}(u) = 1 at spatial frequencies where the SNR for both the k and k′ frames in the l and l′ channels is greater than some chosen threshold, which we set to 5, and w_{l,l′,k,k′}(u) = 0 otherwise.
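The spectral-ratio consistency check can be sketched in cross-multiplied form, which avoids dividing by noisy spectra. This is one possible arrangement of the metric, with our function names; the binary SNR weighting follows the thresholding described above.

```python
import numpy as np

def spectral_ratio_cost(G1, G2, H1, H2, w):
    """Consistency metric for the spectral ratio G1/G2 = H1/H2, written
    in cross-multiplied form: for a common object O, G1 = O*H1 and
    G2 = O*H2, so G1*H2 - G2*H1 vanishes in the absence of noise."""
    return np.sum(w * np.abs(G1 * H2 - G2 * H1) ** 2)

def snr_weights(G1, G2, noise_level, thresh=5.0):
    """Binary weights: 1 only where both frames exceed the SNR
    threshold (5 in the text), 0 elsewhere."""
    snr1 = np.abs(G1) / noise_level
    snr2 = np.abs(G2) / noise_level
    return ((snr1 > thresh) & (snr2 > thresh)).astype(float)
```

Because the object spectrum cancels, this metric constrains the OTFs (and hence the wave-front phases) without requiring an object estimate at those frequencies.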
Secondly, we continue MFBD iterations with the object fixed while the PSFs are updated using the data in both the first two channels. Spatial frequencies in the OTF of channel l′ with SNR < 5 are now allowed to update. Further, this step refines the high spatial frequencies of the PSF estimates, reducing artifacts caused by noise in the measured spectral ratio data. At the end of this step we perform a full joint estimation of the object and PSFs (via their wave front phases).
The whole process is then repeated at the beginning of each pass through the bootstrap procedure as the next higher resolution data are introduced into the MFBD problem. At the completion of the bootstrap process we find that the data models are a good representation of the data, which allows us as a final step in the process to switch to the Poissonian-Gaussian weighting function [11,12] with ĝ in the denominator fixed at the final values from the bootstrap. We perform a joint estimation of the object and PSFs on all channels simultaneously, initialized with the refreshed object estimate and zero phases, similar to our basic MFBD approach. This last step is repeated until the estimate for the object shows no further change.
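The overall control flow of the bootstrap might be organized as below. This is a purely structural sketch: `mfbd_pass`, `refresh`, and `seed_psfs` are hypothetical stand-ins for the MFBD minimization, the object-refresh step, and the two-step PSF seeding (spectral ratios plus fixed-object iterations) described above.

```python
def bootstrap_restore(channels, mfbd_pass, refresh, seed_psfs):
    """Outer loop of the bootstrap (sketch with hypothetical callables):
    restore the smallest-aperture channel first, refresh the object,
    then fold in each larger aperture in turn."""
    obj, active = None, []
    for ch in channels:                    # smallest aperture first
        active.append(ch)
        if obj is not None:
            # seed the new channel's PSFs with the object held fixed
            seed_psfs(ch, obj, active)
        obj = mfbd_pass(active, init_obj=obj)
        obj = refresh(obj, active)         # forward deconvolution,
    return obj                             # regularized by truncation
```

The essential points are the ordering (smallest aperture first) and that each new channel inherits the current object estimate rather than starting blind.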
At present, our algorithm takes the phases in each channel to be independent of each other, an assumption that is well founded for data taken with multiple separated telescopes. For data from a single segmented pupil, however, this approach is too simplistic. In the latter case we expect some spatio-temporal coherence between the wave fronts across the different segments. We believe that further improvements in our algorithm will be obtained by incorporating the knowledge that the pupil segments are contiguous, but that work remains to be done.

Simulated data
To test our proposed aperture diversity/bootstrap restoration technique, we use numerical simulations of observations of the Hubble Space Telescope (HST) taken through various size apertures for different levels of atmospheric turbulence.
We use a two-layer, frozen-flow description of the atmosphere and a Kolmogorov model to generate a time series of PSFs for the different apertures [19]. The wind velocity vectors are commensurate with values observed at Mt. Haleakala, approximately 5 m/s for the lower (ground) layer and 30 m/s for an upper layer at 16 km [20]. The different telescope pupils are all modeled at a scale of 1.5 cm per pixel, which gives 240 pixels across the pupil of the 3.6 m telescope. The resulting PSFs, which come from temporally correlated wave fronts, are convolved with a numerical model of the HST on a 512 × 512 pixel grid to provide noise-free models of the observed images, which are oversampled. The photon flux in each image is scaled to the aperture area, with the total in the 3 images from the multi-aperture data set equal to the flux in the single image from the 3.6 m filled aperture. Since it is not fundamental, we do not include read noise from the detectors, assuming that devices such as noiseless EMCCDs will be used. Poisson noise is, however, added to the images to simulate the observed data. For photometric purposes, the target is assumed to have a brightness of m_v = +2 observed through a filter centered at 0.6 μm with 50% bandwidth. This corresponds to 3.0 × 10⁷ photons/image for the full 3.6 m aperture in a 5 ms integration [19]. For the present work, we do not model chromatic effects in the PSF, instead assuming monochromatic imaging. While the detailed PSF structure will certainly depend on the optical bandwidth, that is the subject of future work.
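A toy version of this data simulation is sketched below: a single Kolmogorov phase screen per frame (the real simulation uses a two-layer frozen-flow model), a PSF from the aberrated pupil, flux scaling, and Poisson noise. The FFT-method normalization constant 0.1517 (= sqrt(0.023)) is the commonly quoted approximation, and the function names are ours.

```python
import numpy as np

def kolmogorov_screen(n, r0_px, rng):
    """Single Kolmogorov phase screen, FFT method (sketch): shape the
    spectrum of complex white noise by the -11/6 amplitude power law.
    Constants are approximate."""
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx))
    f[0, 0] = 1.0                              # avoid division by zero
    amp = 0.1517 * (1.0 / r0_px) ** (5 / 6) * f ** (-11 / 6) * n
    amp[0, 0] = 0.0                            # no piston term
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(noise * amp))

def speckle_frame(obj, B, phi, flux, rng):
    """Noisy short-exposure frame: PSF from the aberrated pupil,
    circular convolution with the object, flux scaling (aperture
    area enters through the chosen flux), then Poisson noise."""
    psf = np.abs(np.fft.fft2(B * np.exp(1j * phi))) ** 2
    psf /= psf.sum()
    img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
    img = np.clip(img, 0.0, None)
    img *= flux / img.sum()
    return rng.poisson(img).astype(float)
```

Scaling `flux` in proportion to aperture area reproduces the photometric bookkeeping described above, with the partitioned channels summing to the full-aperture flux.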
We note that the temporal correlation in the wave fronts used to generate the PSFs reduces the diversity in their morphology in a small ensemble. This in turn reduces the leverage in the MFBD restoration process that comes from using multiple frames. This makes the restoration problem significantly more challenging than when the PSFs are not temporally correlated, the case most often described in the literature for MFBD restoration of imagery acquired through atmospheric turbulence.

Tests of proposed approach
We examine three different experiments to demonstrate the benefit of aperture-diverse imaging over traditional full aperture imaging. Each experiment simulates the acquisition of speckle data under different conditions: from moderate turbulence using multiple telescopes, to strong turbulence with a single telescope, to extreme turbulence using a single telescope equipped with a wave front sensor. We will first look at the results from the single telescope experiments and finish with those from the multi-telescope experiment.

Observing through strong turbulence using a single telescope
Here we simulate imagery acquired using a 3.6 m telescope with a 0.4 m secondary obscuration observing through an atmosphere with r0 = 9 cm. At D/r0 = 40 the simulated data are therefore at the limit of the regime of turbulence strength that classic MFBD techniques can address, as illustrated in Fig. 2, and are beyond the reach of most adaptive optics systems.
In particular, we simulate observations made with the full pupil and with three annular apertures with outer diameters of 1.0 m, 1.6 m and 3.6 m, as shown in Fig. 5. The outer diameters of the two inner annuli were selected to match available telescopes at the Maui Space Surveillance Complex on Mt. Haleakala, since our analysis is applicable to the case of distributed as well as annular apertures (see §5.3).
The object estimates for the two different restorations, from single full-aperture data and from all three partitioned apertures using the bootstrap technique, are shown in Fig. 6. The gain in image quality for the proposed approach is clear: the RMSE improves from 0.34 in the single-aperture case to 0.29 from the partitioned apertures.

Observing through extreme turbulence using a single telescope equipped with a wave front sensor
A useful extension of the basic aperture diverse MFBD approach is available when the imaging system is equipped with a Shack-Hartmann wave-front sensor (WFS). Jefferies and Hart [19] have demonstrated that for large values of D/r 0 a significant increase in imaging performance can be obtained through incorporating WFS data into the MFBD restoration process, in effect making the process "myopic" rather than "blind" [21]. Here we note that if the Shack-Hartmann is configured to provide a critically sampled image of the object through each sub-aperture, rather than the under-sampled images typically made by a Shack-Hartmann sensor, then we can introduce an additional set of smaller apertures into our aperture diversity setup while maintaining wave-front sensing capability.
The MFBD restoration of the sub-aperture images provides information on both the wave fronts in each of the sub-apertures in the pupil (beyond simple tip-tilt), and thus the wave front across the whole pupil, and the lowest spatial frequencies of the object. These data now provide the first step in the bootstrap process. We note that this approach to estimating the wave-front phase is an extension of the multi-aperture phase retrieval (MAPR) technique [22,23] to extended sources. Having access to wave-front sensor data means that we can move to a multi-layer model for the atmosphere and can model temporal correlations in the data using the Taylor frozen flow hypothesis [19]. That is, we replace the single per-frame phase estimate with a sum of wind-shifted layer phases,

Φ̂_k(u) = Σ_i φ̂_i(u − v_i k Δt),

where the summation is over the phase delays caused by each layer i at time k, v_i is the velocity of the layer, and Δt is the time between consecutive data frames. An overview of the restoration process used when wave-front sensor data are available is shown in Fig. 3.
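The frozen-flow composition of the pupil phase can be sketched with integer-pixel shifts (sub-pixel interpolation is omitted for brevity; a full implementation would interpolate).

```python
import numpy as np

def frozen_flow_phase(layers, velocities, k, dt, scale_m_per_px):
    """Pupil phase at frame k under Taylor frozen flow: each layer's
    phase screen is translated by v_i * k * dt and the layers are
    summed. Shifts are rounded to whole pixels via np.roll, which
    also wraps at the edges (a toy simplification)."""
    total = np.zeros_like(layers[0])
    for screen, v in zip(layers, velocities):
        dy = int(round(v[0] * k * dt / scale_m_per_px))
        dx = int(round(v[1] * k * dt / scale_m_per_px))
        total += np.roll(np.roll(screen, dy, axis=0), dx, axis=1)
    return total
```

Because consecutive frames see the same screens merely translated, the per-frame phase unknowns collapse to one set of layer screens plus known wind vectors, which is what gives the frozen-flow model its leverage.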
In our second simulation we split the photon flux equally between two channels, one to a "science" camera and one to an imaging Shack-Hartmann WFS (i.e., in a MAPR configuration). We explored two scenarios, a full pupil and an aperture diverse system, where the full pupil is partitioned into three annuli with the same dimensions as in the previous simulation. The Shack-Hartmann WFS had six sub-apertures across the full pupil (32 sub-apertures in total, each of size 0.6 m). Figure 7 illustrates the appearance of the images seen through the WFS. In addition, since the availability of WFS data provides us with knowledge of the wind vectors of the different layers of turbulence [19], we apply the frozen flow approximation to model the flow of atmospheric phase errors over the contiguous apertures in all the channels. The reconstruction of the phase across the full pupil through the analysis of the sub-aperture data yields the same information as the spectral ratios during the bootstrap process, and so that metric in Eq. (5) is not used here.
When using the imaging Shack-Hartmann sub-aperture data we generated a low-spatial-frequency estimate of the wave-front phases from the measured centroids of the image data [22]. We then used a "compact" multi-frame blind deconvolution approach [17], and selected a sub-set of the data that provided Fourier-plane coverage of 90% of the set of spatial frequencies measured by the sub-apertures, to provide the first estimate of the object. This object estimate was then held fixed, and the metric in Eq. (2) was minimized using the remaining sub-aperture data to produce phase estimates across the entire pupil using the frozen-flow model (FFM). Finally, in the last step of the annealing schedule, we performed joint estimation of the object and wave-front phases. We found this approach to be more robust than initially minimizing over the entire WFS sub-aperture data set. At the end of this step, the data from the partitioned annuli were added into the restoration using the bootstrap technique described above. Because inclusion of WFS data in the image restoration process allows restoration of data at higher levels of turbulence [19], we increase the level of turbulence to D/r0 = 70 for this second simulation (r0 ≈ 5 cm and D = 3.6 m). The results from the restoration of the science data augmented with simultaneous WFS data as an additional constraint are shown in Fig. 8. Again, the restoration using the aperture-diverse set of images and a bootstrap approach is better than that obtained using a traditional set of full-pupil images: the RMSE is significantly closer to that for the ideal diffraction-limited image. Together, the results in Figs. 6 and 8 validate the benefit of partitioning the aperture into annuli and using a bootstrap approach to the restoration process.

Observing through moderate turbulence using multiple telescopes
This experiment was conducted both to validate the use of multi-aperture imaging for the case of multiple telescopes, and to assess whether any improvement is afforded when imaging through conditions where traditional full-pupil imaging with MFBD restoration already provides a reasonable level of image quality.
Here we simulate observations of the Hubble Space Telescope acquired simultaneously through atmospheric turbulence with r0 ≈ 15 cm, using three telescopes with apertures of 1.0 m, 1.6 m and 3.6 m, all with secondary obscurations of 0.4 m. Photometry was modeled in the manner described in Section 4. For the largest aperture, this gives a turbulence level of D/r0 = 24. As shown in Fig. 4, at this level of turbulence, even though the morphology of the target is not discernible in the raw data, we can still expect a restoration of moderately good quality using a 3.6 m aperture with MFBD processing of the imagery. As above, our selection of aperture sizes for the experiment is motivated by the suite of telescopes available at the summit of Mount Haleakala.
MFBD restorations were computed from the data for all of the telescopes individually, and for the telescope data sets jointly, with and without bootstrapping. Each restoration used 12 ms of data, comprising 6 exposures of 2 ms. We use the RMSE averaged over twenty different realizations of the data sets to measure the quality of the restorations as a function of spatial frequency. For this case we define the RMSE as the root-mean-square difference between the true and estimated object power spectra, azimuthally averaged in the Fourier plane. Figure 9 shows results for the three treatments of the data. When the telescopes are taken separately, Fig. 9(a), we find the low spatial frequencies are best estimated from the data acquired with the smaller apertures, even though they collect significantly less light than the largest aperture. Moreover, as we move from low to high spatial frequencies, the lowest RMSE value is provided first by the 1.0 m aperture and then by the 1.6 m aperture; the RMSE values from the restorations of the 3.6 m aperture data are large at all spatial frequencies.

[Fig. 9(b) caption: The RMSE of restorations using the data from all three apertures. The solid line represents the simultaneous restoration of the combined data. The dotted line shows the result from the bootstrap approach that starts with the smallest aperture data to achieve a good low-spatial-frequency estimate, and then systematically updates the estimate through the addition of data from the larger apertures.]

When the data from all three telescopes are instead processed simultaneously, the resulting RMSE (solid line in Fig. 9(b)) is very similar to the 3.6 m result in Fig. 9(a) and is consistent with the finding of Calef [6]. In contrast, if the data sets are processed in series from smallest to largest aperture, following the bootstrap prescription in §3, the RMSE is substantially reduced at all spatial frequencies. This directly translates into improved imagery and validates the use of a bootstrapping approach for processing imagery acquired simultaneously using multiple telescopes.
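The azimuthally averaged power-spectrum RMSE used as the quality metric here can be computed as follows (a sketch; normalization conventions are a choice).

```python
import numpy as np

def radial_rmse(true_obj, est_obj):
    """RMSE between true and estimated object power spectra,
    azimuthally averaged in the Fourier plane: one value per
    integer radial-frequency bin."""
    n = true_obj.shape[0]
    Pt = np.abs(np.fft.fftshift(np.fft.fft2(true_obj))) ** 2
    Pe = np.abs(np.fft.fftshift(np.fft.fft2(est_obj))) ** 2
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y).astype(int)          # integer radial bins
    sq = (Pt - Pe) ** 2
    # mean squared difference per radial bin, then square root
    prof = np.bincount(r.ravel(), sq.ravel()) / np.bincount(r.ravel())
    return np.sqrt(prof)
```

Plotting the returned profile against radial frequency reproduces the kind of RMSE-versus-spatial-frequency curves shown in Fig. 9.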

Conclusions
Using extensive numerical simulations we have shown that imaging using aperture diversity, where the diversity is in the size of the apertures, combined with a bootstrap approach to the numerical post-processing, produces a higher quality image than can be obtained through the post-processing of data acquired with a single full aperture. The improvement from the bootstrap is evident with or without the inclusion of WFS data.
This extension in the range of turbulence conditions through which high-quality imagery can be obtained translates directly into increased spatial and temporal coverage of the sky: an important consideration given the cost of large telescopes. By moving from traditional speckle imaging using the full pupil of a 3.6 m telescope to speckle imaging using an aperture-diverse data set that includes imaging Shack-Hartmann data, the reach of a high-performance imaging system would be improved from D/r0 ≤ 40 to D/r0 ≤ 70. At Haleakala, typical nighttime values of r0 at zenith are around 15 cm [20]. At zenith angle ζ, r0 scales as (cos ζ)^(3/5), implying that the 3.6 m AEOS telescope could extend routine observations from zenith angles of 65° down to 80°, almost doubling the accessible solid angle. During the day, seeing conditions will be substantially worse because of surface turbulence driven by diurnal heating of the ground. Daylight observing, especially important if the telescope's mission is to characterize low Earth orbit satellites, will therefore benefit from an even more significant fractional improvement in sky access.
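The zenith-angle arithmetic above can be checked directly (the function name is ours; the numbers are those quoted in the text).

```python
import math

def max_zenith_angle(D, r0_zenith, dr0_limit):
    """Largest zenith angle (degrees) at which D/r0 stays within the
    given limit, using r0 proportional to (cos zeta)^(3/5):
    D/r0(zeta) <= L  implies  cos zeta >= [D / (L * r0_zenith)]^(5/3)."""
    cosz = (D / (dr0_limit * r0_zenith)) ** (5 / 3)
    return math.degrees(math.acos(min(cosz, 1.0)))

# D = 3.6 m, r0 = 15 cm at zenith, as quoted for Haleakala
z40 = max_zenith_angle(3.6, 0.15, 40)   # approximately 65 degrees
z70 = max_zenith_angle(3.6, 0.15, 70)   # approximately 80 degrees
```

These values match the 65° and 80° limits cited in the text for D/r0 ≤ 40 and D/r0 ≤ 70 respectively.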
Finally, we note that the results presented in this paper suggest a potential paradigm shift in how we image through atmospheric turbulence. No longer should image acquisition and post-processing be treated as two independent processes: they should be considered as intimately related.