Unsighted ghost imaging for objects completely hidden inside turbid media

Ghost imaging (GI) is an unconventional imaging method that retrieves the image of an object by correlating a series of known illumination patterns with the total reflected (or transmitted) intensity. However, the patterns on the object are required to be known, which severely limits its application scenarios, especially in strongly scattering environments. We here propose a scheme that removes this basic requirement and enables GI to non-invasively image objects through turbid media. As experimental proof, we project a set of patterns towards an object hidden inside turbid media; the media scramble the light so thoroughly that the patterns falling on the object are completely unknown, and the spatial information of both the object and the illumination is lost. We prove that, when the source is within a memory-effect angular range of the turbid medium, the spatial frequency of the object is preserved in the correlation of GI, which can be used for image reconstruction. This scheme also circumvents the major challenge in non-invasive imaging through turbid media: that the object must be small enough to fit in a field of view, which is usually extremely small in realistic scenarios. Our method removes this limitation and is an important step towards realistic applications.


Introduction
Unlike conventional imaging methods that rely on first-order interference (typically using lenses), ghost imaging (GI) exploits the second-order correlation to reconstruct an image, bringing advantages such as better resistance to turbulence [1], high detection sensitivity [2,3], lensless imaging capability [4], and broad adaptability to different scenarios [5][6][7]. GI has therefore drawn much attention over the past two decades, inspiring many potential applications in fields ranging from optical imaging [8] and x-ray imaging [9][10][11] to atomic sensing [12,13]. A typical GI setup consists of a test arm and a reference arm. In the test arm, an object is illuminated by a temporally and spatially varying light field. The reflected or transmitted light from the object is collected by a bucket detector with no spatial-resolving capability. The reference arm measures the variation of the light field on a plane conjugate to the object plane, i.e., the illumination patterns. The correlation between the patterns and the bucket signal yields the image of the object. Computational GI removes the reference arm; instead, the light patterns on the object plane are pre-calculated or pre-determined. Since the bucket detector simply collects all the light from the object, the test arm is resistant to turbulence and strong light scattering. Most works have investigated GI with a turbid medium placed between the object and the bucket detector [1,[14][15][16]. GI requires the light patterns illuminating the object to be well determined. If the illumination patterns are disturbed by weak scattering, the reconstruction quality is degraded [17,18]. If a strong-scattering medium placed between the source and the object totally scrambles the patterns, GI fails to recover an image [1,19].
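The correlation described above can be illustrated with a minimal computational-GI simulation in Python. The object, grid size, and pattern count below are arbitrary choices for the sketch; NumPy arrays stand in for the actual acquisition.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small binary test object on a 16x16 grid (illustrative, not the paper's target).
n = 16
obj = np.zeros((n, n))
obj[5:11, 7:9] = 1.0  # a vertical bar

# Preset random binary illumination patterns {M_j}, as in computational GI.
n_patterns = 20000
patterns = rng.integers(0, 2, size=(n_patterns, n, n)).astype(float)

# Bucket detector: total light transmitted by the object for each pattern.
buckets = np.einsum('jxy,xy->j', patterns, obj)

# Second-order correlation <B_j M_j(r)> - <B_j><M_j(r)> recovers the image.
C = np.einsum('j,jxy->xy', buckets - buckets.mean(), patterns) / n_patterns
```

With enough patterns the correlation image converges to the object, up to a scale factor and statistical noise.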
Although one approach used a GI configuration to image objects hidden behind turbid media, it exploited the non-Gaussian correlation, which yields a resolution of the same order as the distance from the front surface of the turbid medium to the object (the width of the medium plus the distance between the medium and the object) [20]. If this distance is several centimeters, it cannot resolve structures smaller than a centimeter, limiting its applications. This differs from a typical GI scheme, which is based on Gaussian statistics and whose resolution depends on the grain size of the speckle patterns on the object [21,22].
We here propose a new scheme that allows one to perform GI without knowing the illumination patterns on the object. The imaging resolution still obeys the usual GI rule: it is determined by the average grain size of the illumination patterns. This scheme enables GI to non-invasively image an object completely hidden inside opaque media. In the next section, we demonstrate a GI experiment with an opaque medium placed between the light source and the object, which strongly scrambles the illumination light and makes the patterns falling on the object entirely indeterminable. The correlation of the preset illumination patterns with the bucket detections no longer reveals the image. However, we show that the correlation of GI preserves the spatial frequency of the object, even after the real-space information of both the object and the illumination is completely lost. This is the key that allows GI to recover images from highly scattered light when turbid media entirely surround an object. We provide both experimental and theoretical proofs.
Moreover, our method circumvents the challenge of a limited field of view (FOV) on the object plane, which has hindered the application of scattering imaging [23,24]. Each point of the object emits light that is scattered by the turbid medium and forms a speckle pattern on a detection plane. With an incoherent source, the pattern detected by a camera is a superposition of the speckle patterns from all points of the object. The memory effect defines a solid angle from a central point of the turbid medium to the scene, i.e., a FOV. Light propagating from any point within the FOV to the detection plane shares the same point-spread function (PSF): patterns originating from two points within the FOV are highly correlated, while those from two points outside it are uncorrelated [25]. Therefore, if the object is too big to fit in the FOV, the detected pattern loses this correlation and flattens out, making image retrieval difficult. Unfortunately, the FOV shrinks as the thickness of the turbid medium increases. In reality, the FOV is usually extremely small, so previous scattering-imaging methods only work for extremely small scenes. Our method, however, removes the size requirement on the object. It only places a size limitation on the light source, which can be easily satisfied (see the discussion for details). This feature is key to applying scattering imaging to realistic scenarios.
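The inverse scaling between FOV and medium thickness can be sketched with the standard memory-effect estimate Δθ ∼ λ/(πL). This is a scaling law only (the prefactor depends on the medium), and the medium-to-scene distance below is an assumed value for illustration.

```python
import math

def fov_width(wavelength, thickness, distance):
    """Transverse FOV at `distance` behind a diffusive slab of `thickness`,
    from the memory-effect scaling dtheta ~ lambda / (pi * L).
    A scaling estimate only; the exact prefactor depends on the medium."""
    return distance * wavelength / (math.pi * thickness)

lam = 633e-9   # LED wavelength used in the experiment
d = 0.1        # assumed 10 cm from the medium to the scene
for L in (0.1e-3, 0.5e-3, 1.0e-3):
    print(f"thickness {L*1e3:.1f} mm -> FOV ~ {fov_width(lam, L, d)*1e6:.0f} um")
```

Even a millimeter-thick diffuser already restricts the FOV to tens of micrometers at this distance, which is why object-side isoplanatism is so restrictive in practice.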

Experiment
As depicted in figure 1, a GI system is employed to image an object hidden between diffusers D1 and D2. The LED emits light at 633 nm with a bandwidth of 15 nm. Each diffuser is a piece of 100-grit ground glass. We used 1024 × 1024 cells of the DMD to project ∼10^6 preset patterns, {M_j(ρ)}, towards the object, while the corresponding bucket signals were successively recorded. {M_j(ρ)} is a set of random binary ('1' and '0') patterns. Since the sparsity of the source affects the contrast of the speckle pattern on the object plane, only N pixels are '1' in each pattern, where N is a random integer smaller than 20. Note that the bucket detector should be placed behind D2 for a transmissive object. In a typical GI setup, i.e., when there is no diffuser D1 or its scattering is very weak, the patterns on the object are similar to {M_j(ρ)}. The correlation between {M_j(ρ)} and the corresponding bucket signals {B_j} then yields the image of the object, i.e.,

C(r) = ⟨B_j M_j(r)⟩ − ⟨B_j⟩⟨M_j(r)⟩ ∝ O(r).    (1)

When the diffuser D1 is placed between the light source and the object, every projected pattern is highly scrambled, forming a random speckle-like illuminating pattern on the object plane, denoted {P_j(r)}.
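The sparse preset patterns described above can be generated as follows. This is a sketch of the pattern statistics only (at most 19 randomly placed 'on' cells per pattern), not of the DMD control itself; the reduced grid size is for quick testing.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_dmd_patterns(n_patterns, shape=(1024, 1024), max_on=20):
    """Sparse binary DMD patterns: each has N randomly placed '1' cells,
    with N a random integer below max_on, as described above."""
    patterns = np.zeros((n_patterns,) + shape, dtype=np.uint8)
    n_cells = shape[0] * shape[1]
    for j in range(n_patterns):
        n_on = rng.integers(1, max_on)  # N in [1, max_on - 1]
        on = rng.choice(n_cells, size=n_on, replace=False)
        patterns[j].flat[on] = 1
    return patterns

# Reduced grid for a quick check; the experiment uses (1024, 1024).
pats = make_dmd_patterns(8, shape=(64, 64))
```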
{P_j(r)} are completely different from {M_j(ρ)} and indeterminable; see the inset of figure 1. The correlation C(r) no longer yields the image but a random-like pattern (see figure 1(b)). However, as long as the size of the light source is small enough (satisfying the isoplanatic condition for the scattering layers), the Fourier magnitude of C(r), denoted |C̃(u)|, exhibits the spatial frequency of the object (see figure 1(c)). Applying a phase retrieval algorithm that combines the hybrid input-output (HIO) and error-reduction (ER) algorithms [26], we eventually reconstructed the image of the object from |C̃(u)|, as shown in figure 1(d).
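A numerical sketch of this observation: even when the preset patterns are scrambled by an unknown PSF before reaching the object, their correlation with the bucket signals retains the object's Fourier magnitude. The object, PSF, and sizes below are illustrative; a compact random kernel stands in for the diffuser's speckle PSF only to keep the simulated noise manageable at this pattern count (the identity holds for any shift-invariant PSF).

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_pat = 32, 20000

# Illustrative object; the PSF below stands in for the unknown diffuser D1.
obj = np.zeros((n, n))
obj[10:22, 14:17] = 1.0
psf = np.zeros((n, n))
psf[:5, :5] = rng.random((5, 5))   # compact random kernel (keeps noise low)
psf /= psf.sum()
psf_f = np.fft.fft2(psf)

M = rng.integers(0, 2, size=(n_pat, n, n)).astype(np.float32)

# Scrambled illumination P_j = M_j * S_MO; bucket signal B_j = sum(O . P_j).
B = np.empty(n_pat)
for j in range(n_pat):
    P = np.real(np.fft.ifft2(np.fft.fft2(M[j]) * psf_f))
    B[j] = np.sum(obj * P)

# Correlate the buckets with the *preset* patterns -- the only ones we know.
C = np.einsum('j,jxy->xy', B - B.mean(), M) / n_pat

# Real space: C looks random. Fourier magnitude: |O~| x |S~| survives.
lhs = np.abs(np.fft.fft2(C))
rhs = np.abs(np.fft.fft2(obj)) * np.abs(psf_f)
r = np.corrcoef(lhs.ravel(), rhs.ravel())[0, 1]
```

In real space C is random-looking, but its Fourier magnitude matches |Õ(u)| · |S̃(u)| up to statistical noise; this is the quantity the phase-retrieval step operates on.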
A theoretical interpretation is given in the following section. The following experimental conditions are worth mentioning. (1) The lens images the DMD onto a plane between the lens and the diffuser, 39 mm away from the lens and 211 mm away from the diffuser. Light from each pixel of the DMD then produces a spot more than 100 mm wide on the diffuser's surface. Thus, within the aperture (6 mm wide), the light from different pixels mixes together and generates a speckle-like pattern behind the diffuser. Different patterns on the DMD induce different random patterns on the object plane. The next section shows that a random-like pattern P_j(r) is a convolution of the DMD's pattern M_j(ρ) and the PSF of the system. (2) The bucket detector is placed off center, so that it only receives the light reflected back from the object and avoids the light reflected by the diffuser before being transmitted through it. (3) If the spectrum of the source is too broad, decorrelated patterns generated by the scattering of wavefronts at different frequencies will mix on the object plane, forming smeared speckles. In terms of temporal coherence, the coherence length of the source needs to be longer than a value proportional to the transport mean free path of the turbid medium [23]. In the experiment, the 15 nm bandwidth of the LED works well with the diffuser we used.

Theory
In the following, we provide a general theoretical analysis for a GI system with an incoherent source in terms of PSFs. If the size of the light source satisfies the isoplanatic condition, i.e., it lies within the memory-effect range with respect to the scattering medium (D1 in figure 1), the PSF from the DMD plane to the object plane, S_MO(r − ρ), is shift-invariant [27]. The resulting illumination pattern on the object plane can then be described by linear-system theory as the convolution of the source pattern and the PSF, i.e., P_j(r) = [M_j ∗ S_MO](r). The bucket detection can be formulated as

B_j = ∫ O(r) P_j(r) dr = ∫ O(r) [M_j ∗ S_MO](r) dr.    (2)

Since the spatial information of the illumination on the object is lost due to scattering, the correlation between {P_j(r)} and the corresponding bucket signals cannot be established. Instead, we analyze the correlation between the preset patterns on the DMD and the bucket signals:

C(ρ) = ⟨B_j M_j(ρ)⟩ − ⟨B_j⟩⟨M_j(ρ)⟩ = [O ∗ S_MO ∗ δ_D](ρ).    (3)

Here, δ_D is a peak function whose width represents the resolution of {M_j(ρ)} (i.e., the pixel size of the DMD). Note that the scattering medium must stay static during the measurement; otherwise the PSF will change during the imaging process. Taking the Fourier transform of both sides, we obtain

C̃(u) = Õ(u) · S̃_MO(u) · δ̃_D(u),    (4)

where the tilde denotes the two-dimensional Fourier transform and u is the spatial-frequency coordinate vector. S̃_MO is called the optical transfer function (OTF). The Fourier-magnitude form of equation (4) is

|C̃(u)| = |Õ(u)| · |S̃_MO(u)| · |δ̃_D(u)|.    (5)

|S̃_MO(u)| is the magnitude of the OTF, i.e., the modulation transfer function (MTF). Note that C̃(u) and δ̃_D(u) can be easily determined. As long as |S̃_MO(u)| is predictable, the spatial frequency of the object, |Õ(u)|, can be resolved. Equations (2)-(5) hold for a general GI system with an incoherent source. In the following, we discuss two cases, without and with the turbid medium, in which the property of |S̃_MO(u)| can be predicted. Case 1. Without the turbid medium (namely, D1 is removed). Here, the patterns on the DMD can be precisely projected onto the object with the lens.
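A quick numerical check of the assumption behind the δ_D term: the preset patterns must be pixel-wise uncorrelated, so that their ensemble autocovariance is a narrow peak. The check below uses Bernoulli(1/2) patterns on a reduced pixel count; the experiment's sparse patterns behave the same up to a weak negative cross-correlation between pixels.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_pat = 64, 50000

# Each column is one DMD pixel's time series over the pattern ensemble.
M = rng.integers(0, 2, size=(n_pat, n_pix)).astype(float)
dM = M - M.mean(axis=0)

# Ensemble autocovariance <dM(p) dM(p')>: ~0.25 on the diagonal
# (Bernoulli(1/2) variance) and ~0 off it -- a narrow peak, i.e. delta_D.
cov = dM.T @ dM / n_pat
on_diag = np.diag(cov).mean()
off_diag = np.abs(cov - np.diag(np.diag(cov))).max()
```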
In a lens system, S̃_MO is determined only by the pupil function of the lens (for incoherent light, the OTF is the normalized autocorrelation of the pupil function).

Case 2. With the turbid medium D1 in place. The overall PSF from the DMD plane to the object plane is a cascade of the lens system and the scattering system,

S_MO(r − ρ) = ∫ S_S(r − ξ) S_L(ξ − ρ) dξ.

Here, ξ is the coordinate vector of an arbitrary transverse plane located between the lens and the object, S_L(ξ − ρ) is the PSF of the lens system from the DMD plane to the ξ plane, and S_S(r − ξ) is the PSF of the scattering system from the ξ plane to the object plane. The Fourier-magnitude form is

|S̃_MO(u)| = |S̃_L(u)| · |S̃_S(u)|.

|S̃_S| plays the same role as |S̃_L| does, i.e., it limits the range of the spatial frequency. The product of the three MTFs, F(u) ≡ |S̃_L(u)| · |S̃_S(u)| · |δ̃_D(u)|, acts as an overall spatial filter that cuts off high spatial frequencies, defining the resolution of the image. Based on the above analysis, |C̃(u)| = |Õ(u)| · F(u) is the Fourier magnitude of a diffraction-limited image. When the scattering system contains sufficient random scatterers to satisfy the ergodic-like condition [24,25], the scattering MTF takes the form

|S̃_S(u)|² = δ_G(u) + [T_S ⋆ T_S](u),

where δ_G is a delta-like peak representing the zero frequency related to the background of the illumination, T_S represents the squared modulus of the aperture function right in front of D1, and ⋆ denotes autocorrelation. With an even light-intensity attenuation over the circular diffuser (as in our experiment), |S̃_S(u)| has a gentle slope within its spatial-frequency passband [24], so |S̃_S| works as a spatial-frequency filter just as |S̃_L| and |δ̃_D| do. In our experiment, the resolution of the image is mainly limited by |S̃_S|, i.e., ∼ λZ_O/D ≈ 30 μm, where Z_O is the diffuser-to-object distance and D is the aperture width. Note that the resolution limit set by |S̃_L| is ∼ λf/L ≈ 0.6 μm (L is the size of the lens), and that set by δ_D is the DMD pixel size, ≈ 7.4 μm. The missing Fourier phase of the image can be restored from |C̃(u)| using a phase retrieval algorithm, such as HIO or ER [26]. The image is then reconstructed.
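The quoted resolution limits follow from simple ratios. A back-of-envelope check: only λ = 633 nm and the 6 mm aperture are given in the experiment, so the diffuser-to-object distance Z_O below is an assumed value chosen to reproduce the ∼30 μm figure.

```python
# Back-of-envelope check of the resolution limits quoted above.
lam = 633e-9        # m, LED central wavelength
D = 6e-3            # m, aperture width in front of D1
Z_O = 0.285         # m, ASSUMED diffuser-to-object distance (illustrative)
res_scatter = lam * Z_O / D            # limit set by |S_S|
dmd_pixel = 7.4e-6                     # limit set by delta_D (quoted in text)
print(f"lambda*Z_O/D = {res_scatter*1e6:.1f} um (dominant)")
print(f"DMD pixel    = {dmd_pixel*1e6:.1f} um")
```

The scattering-aperture limit dominates the DMD-pixel limit by roughly a factor of four, consistent with the statement that |S̃_S| sets the resolution here.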

Discussion
We have demonstrated a GI experiment without knowing the light patterns illuminating the object. Even if the object is hidden entirely inside strong-scattering media, the image can still be recovered as long as the light source is small enough. To interpret this imaging scheme, we provided a theoretical analysis of a general GI system with incoherent light in terms of PSFs. In the presence of a turbid medium between the source and the object, the correlation measurement of GI resolves the spatial frequency of the object (instead of an image of the object), from which a diffraction-limited image can be recovered with a phase retrieval algorithm, without knowing the spatial information of the PSF or the patterns falling on the object.
There is a major challenge in the field of scattering imaging that has been hindering the development of its applications: the size of an object must be small enough to fit in a FOV (the opening angle from the turbid medium subtending the memory-effect range at the object plane). The FOV is inversely proportional to the thickness of the turbid medium. In a realistic scenario, the FOV is usually extremely small, making traditional scattering-imaging methods hard to apply. In contrast, our method places no size limitation on the object. Instead, it requires that the light source lies within the memory-effect range with respect to the turbid medium (D1). We call this the source-side isoplanatic condition, to differentiate it from the object-side isoplanatic condition of the previous methods [23,24]. This feature makes our method very practical: in non-invasive scenarios we cannot change the size or other properties of an object, but we can manipulate our own light source to fulfill the source-side isoplanatic condition. One can physically shrink the source, move it further away from the diffuser, or employ lenses to produce a smaller virtual source. In our experiment, we used a lens to form a smaller virtual source of ∼4 mm (the physical source is ∼7 mm) at a location 210 mm away from D1, where the source-side memory-effect range is ∼10 mm.
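The virtual-source trick is plain thin-lens demagnification. In the sketch below, the focal length and source-to-lens distance are assumed values chosen to reproduce the 7 mm → ∼4 mm reduction quoted above; the experimental geometry may differ.

```python
# Thin-lens sketch of the virtual-source trick: demagnify the physical source
# so it fits within the source-side memory-effect range.
f = 0.05            # m, ASSUMED focal length
u = 0.1375          # m, ASSUMED source-to-lens distance
v = 1.0 / (1.0/f - 1.0/u)    # thin-lens image distance
m = abs(v / u)               # transverse magnification
virtual_source = m * 7e-3    # demagnified image of the 7 mm source
memory_range = 10e-3         # ~10 mm source-side memory-effect range (quoted)
```

Any combination of shrinking the source, increasing its distance, or demagnifying it works, since only the apparent source size at D1 matters.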
This method enables GI to see objects hidden behind turbid media and provides a different perspective on GI. It may also inspire solutions to the challenge of the limited FOV in scattering imaging.