Intensity correlation holography for remote phase sensing and 3D imaging

Holography is an established technique for measuring the wavefront of optical signals through interferometric combination with a reference wave. Conventionally the integration time of a hologram is limited by the interferometer coherence time, thus making it challenging to prepare holograms of remote objects, especially using weak illumination. Here, we circumvent this limitation by using intensity correlation interferometry. Although the exposure time of individual holograms must be shorter than the interferometer coherence time, we show that any number of randomly phase-shifted holograms can be combined into a single intensity-correlation hologram. In a proof-of-principle experiment, we use this technique to perform phase imaging and 3D reconstruction of an object at a ∼3 m distance using weak illumination and without active phase stabilization.


Introduction
Determining the phase distribution of a wave from a set of intensity measurements is a fundamental problem in physics. In optics, it is a key component in various imaging techniques, including quantitative phase imaging, holography, and ptychography [1-4]. It also finds applications in astronomy and free-space communications, where phase aberrations in a wavefront can be corrected using adaptive optics [5].
Optical wavefronts are generally characterized using some sort of interferometric technique. Self-referencing techniques, such as wavefront sensors [6,7], shear plates [8], and "direct" methods [9], either interfere the signal wave with itself or perform local calibrated phase transformations. These have less stringent stability requirements, but can exhibit limitations such as reduced spatial resolution or ambiguities around amplitude null points [10]. Alternatively, holographic techniques involve interfering the signal with an external reference wave. The use of a reference introduces stability requirements, but alleviates null-point issues and can enhance the measurement signal-to-noise ratio (SNR) through heterodyne gain [11,12].
Conventionally, a hologram is obtained in the following manner. A single light source is split into two paths within an interferometer. One path prepares the signal wave, e.g. by illuminating an object, while the other path serves as the reference. The two paths are then recombined, and the hologram is recorded by measuring the amplitude (first-order) interference pattern, also known as the interferogram. The phase distribution of the signal wave is typically retrieved from multiple interferograms acquired with varying phase shifts by tuning the interferometer path length [13] or using an off-axis configuration [14]. However, these approaches require that the interferometer is stable for the entirety of the measurement. Noise such as vibrations limits the hologram integration time, which in turn limits the precision of the retrieved phase. Increasing the illuminating power can enhance the precision, but this approach may not always be suitable [15,16]. For example, in remote holography, diffuse reflection and poor interferometric stability necessitate the use of high-energy pulsed lasers [17]. Moreover, one may be constrained to using low powers when imaging samples susceptible to photodamage [18-20], particularly at shorter wavelengths.
While the above techniques are based on amplitude interference, it is also possible to perform holography using intensity (second-order) interference by measuring intensity correlations between the signal and reference fields [21-26]. Such correlations can reveal coherence properties in stochastically-varying fields [27]. In particular, an intensity-correlation hologram can be made insensitive to temporal fluctuations in the phase offset between the signal and reference, provided that the correlation time window is sufficiently short. Thus, the hologram can be integrated for arbitrarily long without having to stabilize the interferometer. This idea was demonstrated in Refs. [28,29] using an intensified CMOS camera to perform holography at the single-photon level. In this work, we show that this idea can also be achieved in a conventional holography setup. Intensity correlations are extracted from interferograms captured by a standard frame-based camera. Although the exposure time of each individual frame must be shorter than the interferometer coherence time, an arbitrarily large number of randomly phase-shifted interferograms can be combined into a single intensity-correlation hologram, thereby enabling integration times beyond the coherence time. The signal wavefront can then be retrieved using a principal component analysis of the intensity-correlation hologram. Building on Refs. [28,29], our work presents two main novel results. Firstly, we show that the technique works even when the interferograms are dominated by camera readout noise, thus making it applicable to a broad range of cameras. Secondly, we use digital holographic interferometry to perform 3D imaging of a scattering object located at a distance of ∼3 m. By eliminating the need to actively phase-lock the interferometer, our technique facilitates holography applications restricted to a weak photon flux or a short coherence time, such as phase-sensitive microscopy of fragile samples or sensing of remote objects.

Concept
The signal and reference waves are combined and a camera captures a time series of interferograms {I_n(r)}, where n is a frame index. Each interferogram is described by

I_n(r) = A(r) cos[φ(r) + φ_n] + B(r),  (1)

where r = (x, y) is the pixel coordinate, A(r) is the modulation depth of the interference pattern, which depends on the signal and reference intensities, while B(r) is a background term which includes readout and dark current noise from the camera. The quantity φ(r) is a temporally-fixed spatial phase and is the central quantity of interest. It describes the relative local phase variation between the signal and reference wavefronts, φ(r) = φ_sig(r) − φ_ref(r). We assume that the reference has a spatially-uniform wavefront (φ_ref(r) = 0) such that φ(r) is simply the wavefront of the signal wave. In contrast, φ_n is a global phase offset between the signal and reference waves, which fluctuates within [0, 2π) on a timescale of τ_c, the interferometer coherence time. These fluctuations are typically caused by changes in the interferometer path lengths due to vibrations and other environmental noise. Importantly, the camera exposure time τ_exp must be shorter than τ_c to ensure that φ_n is fixed in each interferogram. In the low-light limit, such as when an object is far away, we need to combine many interferograms in order to achieve a sufficient SNR. However, since φ_n is randomly varying, we cannot simply add these interferograms together, as the fluctuating φ_n would "wash away" the interference term. While there are a number of algorithmic techniques [30-34] to retrieve φ(r) from a series of randomly-shifted interferograms, these approaches are not the most efficient solution to the problem. Instead, here we propose measuring intensity correlations between pixels in the interferograms rather than only the intensities of individual pixels. As we show mathematically below, this allows us to combine the interferograms in a way that is insensitive to drifts in the global phase. Intensity correlations between pairs of pixels are given by:

⟨G(r, r′)⟩ = ⟨I_n(r) I_n(r′)⟩ = B(r)B(r′) + (1/2) A(r)A(r′) cos[φ(r) − φ(r′)].  (2)

The bracket ⟨·⟩ denotes an average over interferograms (i.e. frames). We used the trigonometric identity cos a cos b = (1/2)[cos(a − b) + cos(a + b)], and the fact that ⟨cos[φ(r) + φ(r′) + 2φ_n]⟩ = 0 since φ_n is randomly drifting within [0, 2π) between interferograms. In quantum optics, Eq. (2) is known as Glauber's second-order cross-correlation function. It describes intensity correlations between the signal and reference waves. Due to the factor of 1/2 in the modulation term, the visibility of intensity interference is limited to 50% [35]. The benefit is that ⟨G(r, r′)⟩ does not depend on the phase offsets φ_n, but does depend on the quantity of interest, φ(r). This key fact enables one to measure ⟨G(r, r′)⟩ with arbitrarily long integration times without having to stabilize the interferometer. The intensity-correlation hologram ⟨G(r, r′)⟩ can be measured with an event-based camera by making a histogram of pairs of pixels that fired within τ_c [28], or with a frame-based camera by combining frames captured with an exposure time τ_exp < τ_c, as we show experimentally below. Intensity-correlation holography [Eq. (2)] presents a different phase-retrieval problem than Eq. (1): (i) there is a single input to the problem, which is the intensity correlation function averaged over all interferograms; (ii) the problem is self-referencing, since the phase offsets are now given by φ(r′), the quantity of interest, rather than an external φ_n. Using a trigonometric identity, we can re-write Eq. (2) as

⟨G(r, r′)⟩ = v_bg(r)v_bg(r′) + v_cos(r)v_cos(r′) + v_sin(r)v_sin(r′),  (3)

where

v_bg(r) = B(r),  v_cos(r) = A(r) cos φ(r)/√2,  v_sin(r) = A(r) sin φ(r)/√2.  (4)

If the phase distribution φ(r) takes on values evenly distributed within [0, 2πk), for any integer k, then

Σ_r v_cos(r) v_sin(r) ≈ 0,  Σ_r v_bg(r) v_cos(r) ≈ 0,  Σ_r v_bg(r) v_sin(r) ≈ 0,  (5)

and the components in Eq. (4) are approximately orthogonal. This approximation holds when the interferograms contain both destructive and constructive interference in different parts of the image, e.g. a fringe pattern. In this case, the components in Eq.
(4) can be determined via an eigendecomposition of ⟨G(r, r′)⟩. They can also be determined by performing a principal component analysis (PCA) of a data matrix containing all the interferograms. If we define M = [I_1, ..., I_N], where I_n is a column vector corresponding to the flattened nth interferogram [Eq. (1)], then G = M M^T is equivalent to ⟨G(r, r′)⟩ (where T denotes the transpose). The principal components of M (and thus the eigenvectors of G) can be found directly by performing a singular value decomposition (SVD) of M. The phase can then be retrieved as outlined in Table 1.

Table 1. PCA phase-retrieval algorithm.
1. Define M = [I_1, ..., I_N], where I_n is a column vector corresponding to the flattened nth interferogram.
2. Perform a singular value decomposition of M such that M = UΣV^T.
3. Retrieve the phase from the second and third left-singular vectors, φ = arctan2(u_3, u_2).
4. Unflatten the retrieved phase φ.
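As a concrete illustration, the procedure in Table 1 can be sketched in Python using scipy.sparse.linalg.svds, the partial-SVD routine mentioned in the text. All parameters here (frame size, noise level, and the fringe-pattern phase) are illustrative stand-ins, not the experimental values:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(1)

# Toy interferograms (illustrative parameters):
# I_n(r) = A*cos(phi(r) + phi_n) + B + read noise, phi_n uniform in [0, 2*pi).
ny, nx, N = 32, 32, 2000
phi_true = (2 * np.pi * np.arange(nx) / nx) * np.ones((ny, 1))  # fringe pattern
A, B = 1.0, 2.0
phi_n = rng.uniform(0, 2 * np.pi, N)
frames = B + A * np.cos(phi_true.ravel()[None, :] + phi_n[:, None])
frames += rng.normal(0.0, 0.5, frames.shape)  # camera read noise

# Step 1: data matrix M = [I_1, ..., I_N] (one flattened frame per column).
M = frames.T

# Step 2: partial SVD keeping only the three largest singular values.
U, s, Vt = svds(M, k=3)
order = np.argsort(s)[::-1]      # sort descending (svds returns ascending)
U = U[:, order]                  # u1 ~ v_bg, u2 ~ v_cos, u3 ~ v_sin

# Steps 3-4: phase from the second and third components, then unflatten.
phi_ret = np.arctan2(U[:, 2], U[:, 1]).reshape(ny, nx)
```

Because the cos and sin components are nearly degenerate, the retrieved phase carries a global offset (and a possible sign flip), as noted in the text.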
Our procedure is related to the phase-retrieval algorithm presented in Ref. [33]. That work considers an eigendecomposition of the intensity correlation matrix averaged over pixels (i.e. correlations between frames), whereas our method considers an eigendecomposition of the intensity correlation matrix averaged over frames (i.e. correlations between pixels). The more computationally efficient method depends on how N compares in size to N_x × N_y, i.e. performing an eigendecomposition of M M^T (size N_x N_y × N_x N_y) or M^T M (size N × N). Since only the first few eigenvectors with the largest eigenvalues are needed, these can be efficiently found using partial SVD methods such as "locally optimal block preconditioned conjugate gradient" (LOBPCG) directly on the matrix M, without having to construct either correlation matrix. We used the Python implementation scipy.sparse.linalg.svds. In contrast to Ref. [33], we do not perform any background subtraction before the PCA. Instead, we use the PCA itself to filter the background term v_bg, as we found this approach improves the accuracy of the retrieved phase when the frames are dominated by read noise. In practice, v_bg has the largest singular value due to noise and non-perfect interference visibility. Then, v_cos and v_sin have the second and third largest singular values. The relative ordering between these two components is a priori not known, and so φ(r) is retrieved up to a global offset.

Reflective object: phase imaging of a spatial light modulator
We test the phase-retrieval algorithm using the experimental setup shown in Fig. 1(a). The light source is a continuous-wave laser with a 532 nm central wavelength and > 300 m coherence length (CrystaLaser CL532-100-S). We use a spatial light modulator (SLM, Meadowlarks E-Series 1920×1200) as a reflective phase object. The SLM can be programmed to apply an arbitrary phase mask φ(r) within [0, 2π) to the signal beam. An objective lens (f_obj = 3 mm) is used to expand the illuminating beam, while a second collimating lens (f_col = 300 mm) is placed in the reference path to ensure that the reference has a flat wavefront. The interferometer is purposely made large (d_obj = 3.14 m) to mimic a remote phase sensing scenario. Its coherence time is approximately τ_c ∼ 10 ms. A telescope lens (f_tele = 1000 mm, diameter 50.8 mm) images the SLM plane onto the camera (UI-3240CP-NIR-GL). We record interferograms of size N_x = N_y = 230 pixels (pitch p = 5.3 µm), corresponding to a size of 2.4 mm × 2.4 mm in the SLM plane. The camera has a maximum frame rate of f = 60 frames per second, a quantum efficiency of ∼70%, and a readout noise of 88(1) electrons/pixel (root-mean-square). Given that we use exposure times of τ_exp ≤ 1 ms, the total hologram acquisition time T is mainly determined by the deadtime between frames, i.e. T ∼ N/f. A bandpass filter (Semrock LL01-532) is used to reduce the contribution of room lights to the background levels.
We begin by testing the algorithm using τ_exp = 0.2 ms and a relatively high power of 3.4 µW for the combined signal and reference beams [top row, Fig. 1(b)]. Remarkably, when we lower the power to 15.6 nW, such that each interferogram contains significant readout noise (SNR ∼ 1.25), we still retrieve φ(r) with similar accuracy [bottom row, Fig. 1(b)]. This suggests that the quality of the retrieved phase is limited by systematic errors, such as the SLM calibration, rather than the SNR of each interferogram. The PCA effectively isolates signal intensity correlations from the noise, as shown by the singular value distribution in Fig. 1(c). We also compare our approach to the advanced iterative algorithm (AIA) of Ref. [31].
While the AIA algorithm is able to retrieve the phase from the N interferograms, the PCA algorithm shows better performance. It is also much faster, taking only 1 s on a regular laptop, whereas the AIA takes about 100 s. Next, we fix the combined signal and reference power to 5.7 nW, resulting in approximately 200 signal electrons/pixel/ms. We then vary τ_exp and quantify the PCA performance by computing the normalized mean square error (NMSE) between the experimentally retrieved phase pattern φ_exp(r) and the applied phase pattern φ_th(r):

NMSE = Σ_r |φ_exp(r) − φ_th(r)|² / N_rand,

where N_rand = Σ_r |φ_random(r) − φ_th(r)|² is the mean squared error for a random phase pattern φ_random(r). The NMSE measures the accuracy of the retrieved phase relative to a random guess (NMSE = 1). The results are shown in Fig. 2. We fit the NMSE to √((a/N^b)² + c²), which adds two noise terms in quadrature. The first term, a/N^b, models noise which reduces with the number of frames N, such as shot noise (b = 1/2). The second term, c, models noise that is constant with N, such as readout noise, dark noise, and other systematic errors. For τ_exp = 30 µs [blue curve], the interferograms are dominated by readout noise. We find b = 0.09(2) (uncertainty is one standard deviation), and thus the phase-retrieval accuracy only benefits slightly from increasing N. For τ_exp = 170 µs [orange curve], the frames still contain significant readout noise, and we find b = 0.43(1). This performance nearly scales with shot noise, which suggests that the PCA algorithm is effectively eliminating the camera noise from the interferograms. Finally, for τ_exp = 1 ms [green curve], the signal exceeds the camera noise level and we obtain b = 0.49(4), in close agreement with shot noise.
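For reference, the NMSE metric and the quadrature noise-model fit can be sketched as follows; the fitted curve here is synthetic, generated from assumed coefficients rather than the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def nmse(phi_exp, phi_th, rng):
    """NMSE of a retrieved phase, with differences wrapped to (-pi, pi]."""
    d = np.angle(np.exp(1j * (phi_exp - phi_th)))
    d_rand = np.angle(np.exp(1j * (rng.uniform(0, 2 * np.pi, phi_th.shape) - phi_th)))
    return np.sum(d ** 2) / np.sum(d_rand ** 2)

def model(N, a, b, c):
    """Quadrature noise model: an N-dependent term plus a constant floor c."""
    return np.sqrt((a / N ** b) ** 2 + c ** 2)

# Fit the model to a synthetic shot-noise-limited curve (true b = 1/2).
rng = np.random.default_rng(2)
N = np.logspace(1, 4, 20)
y = model(N, 3.0, 0.5, 0.05) * (1 + 0.02 * rng.normal(size=N.size))
(a, b, c), _ = curve_fit(model, N, y, p0=[1.0, 0.4, 0.1])
```

The fitted exponent b recovers the shot-noise value of 1/2 within the noise of the synthetic data.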

Scattering object: 3D imaging of a dice
We also investigate the phase-retrieval algorithm on interferograms obtained using a scattering object. As shown in Fig. 3(a), a 15 mm × 15 mm × 15 mm dice is mounted on a platform that is placed d_obj = 3.97 m away from a telescope lens with f_tele = 500 mm and a diameter of 50.8 mm (NA ∼ 0.05). Using 125 mW to illuminate the object, we collect roughly 1 mW of scattered light. A neutral density filter is used in the reference path to attenuate the reference to a similar power level. All results below are obtained using N = 200 frames and a τ_exp = 3 ms exposure time. Due to the maximum frame rate of the camera (60 frames per second), the total hologram acquisition time is T ∼ 3.3 s. We first reconstruct the field E_1 = √I(x, y) e^{iφ_1(x, y)} when the object is illuminated at an angle θ_1. The phase φ_1(x, y) is determined using the algorithm outlined in Table 1, whereas the intensity I(x, y) is measured by blocking the reference beam. Both quantities display a speckle pattern [Fig. 3(b)]. While the speckles may look like noise, they are in fact real features in E_1 resulting from light scattering from the rough surface of the dice. We demonstrate the coherence of the speckle pattern using digital holographic interferometry (DHI) [1]. We tilt the bottom mirror by a small amount Δθ along the x direction, and again reconstruct the field E_2 = √I(x, y) e^{iφ_2(x, y)} [Fig. 3(c)]. Taking the difference between the reconstructed fields, |E_1 − E_2|², reveals an interference pattern [Fig. 3(d)]. This operation cancels out the phase shifts in the scattered wavefronts resulting from the rough dice surface. Thus, only the phase shifts due to the path length difference between the illuminating beams remain. The cancellation is not perfect due to errors in the reconstructed fields, which results in residual speckle noise in the interference pattern.
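The speckle-cancellation property of DHI can be illustrated with a toy calculation (all fields and the fringe period here are synthetic): a random speckle phase common to both reconstructed fields drops out of |E_1 − E_2|², leaving only the deterministic tilt-induced fringes.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic speckle fields sharing a common rough-surface phase psi(x, y).
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
I = rng.exponential(1.0, (ny, nx))          # speckle intensity
psi = rng.uniform(0, 2 * np.pi, (ny, nx))   # random speckle phase
dphi = 2 * np.pi * x / 16                   # tilt-induced phase difference

E1 = np.sqrt(I) * np.exp(1j * psi)
E2 = np.sqrt(I) * np.exp(1j * (psi + dphi))

# |E1 - E2|^2 = 2*I*(1 - cos(dphi)): the speckle phase psi cancels exactly.
diff = np.abs(E1 - E2) ** 2
expected = 2 * I * (1 - np.cos(dphi))
```

In the experiment the cancellation is only approximate, since the two reconstructed fields carry independent estimation errors.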
A salient feature of DHI is that the phase difference Δφ = φ_2 − φ_1 can be related to the three-dimensional shape of the scattering object [36-39]. Scattering from a flat surface, we would expect this phase difference to be 2πx/Λ_0, where x is the transverse spatial coordinate on the camera, and Λ_0 is the fringe period given by

Λ_0 ≈ λ/Δθ,  (8)

for small Δθ. However, if the illuminating beam propagates an additional distance δz along the z axis due to the presence of an object, it causes the contour fringes to shift by δx = δz tan(θ) ≈ δz θ, leading to a phase shift of δφ = 2πδx/Λ_0 = Δφ − 2πx/Λ_0. This shift is evident near the bottom of Fig. 3(d), where there is a discontinuity in the fringes due to the gap between the dice and the edge of the platform on which the dice is mounted. There is also a slight chirp in the fringes along the dice faces due to their tilt. We can map the phase difference pattern Δφ to the object depth using

δz = (Δφ − 2πx/Λ_0) Λ_0 / (2πθ).  (9)

Using this equation, we obtain the depth map in Fig. 3(e), which is in agreement with the geometry of the scene [inset in Fig. 3(a)]. Each parameter in Eq. (9) was obtained in the following way: (i) Δφ is obtained by computing the difference in the argument of the two reconstructed fields, as in Fig. 3(d); (ii) θ ∼ 8 × 10⁻² rad is obtained by taking the ratio between two distances (0.19 m and 3.36 m), both measured with a tape measure; and finally, (iii) Δθ ∼ 2 × 10⁻⁴ rad is calculated using Eq. (8), where Λ_0 is extracted from the period of the interference pattern in Fig. 3(d). We also apply a median filter to the retrieved depth map in order to reduce residual speckle noise. We briefly comment on the limitations of the depth resolution of the technique. As indicated by Eq.
(9), the sensitivity can be improved by increasing both θ and Δθ. However, an excessive illumination angle θ leads to shadows in the image where fringes cannot be resolved [36]. Moreover, Δθ cannot be increased arbitrarily, since the camera pixel pitch p sets the smallest resolvable fringe shift, i.e. Δθ_max = λ/2p, thus limiting the depth resolution to ∼2p/θ. This technique can in principle be used to measure sub-mm depth features [38,39].
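A minimal sketch of the depth mapping, assuming the small-angle relations Λ_0 = λ/Δθ and δz = (Δφ − 2πx/Λ_0)Λ_0/(2πθ) with the parameter values quoted in the text (the transverse grid and step profile are invented for illustration):

```python
import numpy as np
from scipy.ndimage import median_filter

lam = 532e-9          # wavelength (m)
theta = 8e-2          # illumination angle (rad)
dtheta = 2e-4         # mirror tilt between acquisitions (rad)
Lam0 = lam / dtheta   # fringe period, Eq. (8)

def depth_map(dphi, x):
    """Map the DHI phase difference to object depth, Eq. (9)."""
    # Subtract the flat-surface fringe phase and wrap to (-pi, pi].
    dphi_obj = np.angle(np.exp(1j * (dphi - 2 * np.pi * x / Lam0)))
    return dphi_obj * Lam0 / (2 * np.pi * theta)

# Illustrative example: a flat surface with a 1 mm step halfway across.
x = np.linspace(0, 5e-3, 200)              # transverse coordinate (m)
dz_true = np.where(x > 2.5e-3, 1e-3, 0.0)  # depth profile (m)
dphi = 2 * np.pi * x / Lam0 + 2 * np.pi * theta * dz_true / Lam0
dz = median_filter(depth_map(dphi, x), size=5)  # median filter vs speckle
```

The median filter plays the same role as in the experiment, suppressing isolated speckle outliers in the retrieved depth map.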

Conclusion and outlook
To summarize, we showed that randomly phase-shifted interferograms can be coherently combined into a single intensity-correlation hologram. This allowed us to extend the integration time of the hologram beyond the interferometer coherence time, thereby facilitating phase imaging in low-light and mechanically-unstable conditions. Our phase-retrieval algorithm is fast, simple, and resilient to camera readout noise. As a proof-of-principle demonstration of remote sensing, we determined the wavefront of a signal wave scattered from an object at a distance of d_obj ∼ 3 m. We also used digital holographic interferometry to measure the 3D shape of an object with mm resolution.
In future work, one could extend d_obj to much longer distances (e.g. hundreds of meters). Noise arising due to object movement and atmospheric turbulence can be overcome by capturing the intensity-correlation hologram on microsecond timescales using high-framerate or event-based cameras [28]. The object distance could even exceed the coherence length of the laser, so long as the detector timing resolution is faster than the coherence time of the laser. With an increasing distance but a fixed object size, one would eventually need to mitigate diffraction limits by enlarging the telescope diameter D. Moreover, the camera pixel pitch should be smaller than λ f_tele/D to resolve the interference pattern resulting from the highest frequency component of the imaging system.

Fig. 1 .
Fig. 1. Holography of a reflective object. (a) Experimental setup. NDF: neutral density filter, BPF: bandpass filter, BS: beam splitter, SLM: spatial light modulator. (b) Retrieved phase using the principal component analysis (PCA) algorithm described in the main text and the advanced iterative algorithm (AIA) of Ref. [31]. We use N = 1000 randomly phase-shifted interferograms such as the ones shown in the column labelled "typical frame". Top (bottom) row shows results when the interferograms have a high (low) SNR. (c) Distribution of the singular value magnitudes from the PCA in the low-SNR case. Insets show the first four principal components, where v_1 = v_bg, v_2 = v_cos, v_3 = v_sin, and v_4 is the largest noise component.

Fig. 2 .
Fig. 2. Phase error. (a) Applied φ_th and an example of the retrieved φ_exp (with N = 10,000 and τ_exp = 1 ms) phase patterns used for the error analysis. (b) Log-log plot of the normalized mean square error (NMSE) of the retrieved phase using N frames. Each curve corresponds to a different exposure time τ_exp. (c) Typical frame for each exposure time. The SNR of the interferograms is obtained by dividing the number of signal photoelectrons by the read-noise photoelectrons.

Fig. 3 .
Fig. 3. Holography of a scattering object. (a) Experimental setup. Inset is a photo of the object seen from above (i.e. looking down the y axis). (b) and (c) Reconstructed intensity (left) and phase (right) distributions with illumination angles θ_1 and θ_2. (d) Interference pattern I(x, y)(1 + cos(Δφ)) (left) and phase difference map Δφ (right). (e) Left shows the recovered depth map without any filtering. Right shows the depth map with the colormap modulated by the intensity I(x, y) and a median filter applied to reduce speckle noise.