Correlation Plenoptic Imaging between Arbitrary Planes

We propose a novel method to perform plenoptic imaging at the diffraction limit by measuring second-order correlations of light between two arbitrarily chosen reference planes within the three-dimensional scene of interest. We show that, for both chaotic light and entangled-photon illumination, the protocol makes it possible to change the focused planes in post-processing and to achieve an unprecedented combination of image resolution and depth of field. In particular, the depth of field turns out to be larger by a factor of 3 than in previous correlation plenoptic imaging protocols, and by an order of magnitude than in standard imaging, while the resolution is kept at the diffraction limit. These results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled-photon illumination, thus contributing to making correlation plenoptic imaging effectively competitive with commercial plenoptic devices.


Introduction
Plenoptic imaging (PI) is a recently established optical imaging technique that makes it possible to collect the light field, namely, the composite information on the spatial distribution and direction of light coming from the scene of interest [1,2]. The reconstruction of light paths can be used, in post-processing, to refocus out-of-focus planes, change the point of view, and extend the depth of field (DOF) within the three-dimensional scene of interest. PI is also one of the simplest and fastest methods to obtain three-dimensional images with current technology [3][4][5][6][7][8][9]. In state-of-the-art plenoptic cameras, the composite information on the spatial distribution and direction of light is collected by means of a microlens array; this imposes a significant resolution loss, well below the diffraction limit defined by the numerical aperture (NA) of the camera lens [10][11][12]. Attempts to weaken the resolution vs. DOF trade-off have been made by using signal processing and deconvolution [3,5,[13][14][15], and other algorithms and analysis tools [7,16].
In this perspective, we have recently proposed a fundamentally different approach, named correlation plenoptic imaging (CPI), in which the spatio-temporal correlation properties of light are exploited to physically decouple the image formation from the retrieval of the propagation direction of light, the two being registered by disjoint sensors [17]. As a consequence, no microlens array is required and diffraction-limited resolution can be recovered. CPI has been proposed for both chaotic light [17] and entangled-photon illumination [18], and several alternative configurations [19][20][21] have been considered. The first experimental demonstration of CPI has been performed with chaotic light [22], and the analysis of the signal-to-noise ratio in specific cases [23,24] has been carried out.
The common feature of most plenoptic imaging protocols explored so far is that directional information is retrieved by imaging two specific planes: one arbitrarily chosen within the 3D scene of interest, and one coinciding with either the focusing element or any other lens within the device. However, focusing composite lenses, such as camera lenses or microscope objectives, is not trivial, and imposes the identification of correction factors to be introduced in the refocusing algorithm to account for the uncertainty about the effective distance between the two reference planes.
In this paper, we demonstrate that this difficulty can be overcome by performing plenoptic imaging starting from the acquisition of diffraction-limited images of two generic planes, typically chosen within the three-dimensional scene of interest. The core of this proposal, which we shall name correlation plenoptic imaging between arbitrary planes (CPI-AP), is to employ correlated light, such as chaotic light or entangled photons, and to measure correlations between two disjoint sensors, placed in the conjugate planes of the two arbitrarily chosen planes. Besides greatly simplifying the experimental implementation and improving the precision of refocusing, the proposed protocol has the further advantage of relaxing the resolution versus DOF trade-off, so as to reach an unprecedented combination of these two parameters. As we shall discuss, the region between the two chosen planes is also very interesting in terms of achievable resolution; it is thus intriguing to have the opportunity to choose the distance between the arbitrary planes based on both the extension of the sample and the required resolution of the overall 3D scene.

CPI-AP with chaotic light illumination
Let us start by analyzing the CPI-AP protocol in the case where the illuminating light is emitted by a chaotic source. A schematic representation is reported in Fig. 1. Light from the object passes through the lens L_f, of focal length f, and is separated by a beam splitter (BS) into two beams, each one detected by a different spatially resolving sensor, D_a and D_b. The detectors are placed in the conjugate planes of two planes arbitrarily chosen in the surroundings of the object, indicated by D^o_a and D^o_b, respectively, and by the same color as their conjugate sensors; if z'_a and z'_b are the distances between the lens L_f and the two sensors D_a and D_b, respectively, the thin-lens equations 1/z_j + 1/z'_j = 1/f, with j = a, b, define the distances z_a and z_b of the conjugate planes of the detectors, D^o_a and D^o_b, from L_f. As we shall demonstrate, plenoptic information is contained in the spatio-temporal correlations characterizing the intensity fluctuations retrieved by the two sensors.
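As a quantitative illustration of the thin-lens relation fixing the conjugate planes, the sketch below evaluates the sensor distances and magnifications numerically. It uses the parameter values of the simulation discussed below (f = 58 mm, z_m = 290 mm, |z_b − z_a| = 10 mm), assuming the two object-side planes sit symmetrically about z_m; this is our own illustrative estimate, not part of the original analysis.

```python
# Conjugate sensor distances z'_j and magnifications M_j = -z'_j / z_j from
# the thin-lens equation 1/z_j + 1/z'_j = 1/f. All lengths in mm; the two
# object-side planes are assumed symmetric about z_m = 290 mm.
f = 58.0
z_planes = {"a": 285.0, "b": 295.0}

def conjugate_distance(z, f):
    """Image-side distance z' solving 1/z + 1/z' = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / z)

for j, z in z_planes.items():
    z_img = conjugate_distance(z, f)
    M = -z_img / z
    print(f"plane {j}: z'_{j} = {z_img:.2f} mm, M_{j} = {M:.4f}")
```

Both sensors end up roughly 72 mm behind the lens, with demagnifications of about 1/4, consistent with an ordinary camera-like geometry.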
To simplify the computation, we shall consider a planar object, placed at a distance z from L_f, whose emission is characterized by the light intensity profile A(ρ_o) and by negligible transverse coherence. Though, for definiteness, the object will be treated as an emitter of chaotic light, the working principle remains unchanged in case it either reflects, transmits, or scatters chaotic light. As mentioned above, CPI-AP is based on the measurement of equal-time correlations between the light intensities measured at the points of planar coordinates ρ_a, on detector D_a, and ρ_b, on detector D_b. More specifically, the relevant information that enables plenoptic imaging is contained in the correlation between intensity fluctuations

Γ(ρ_a, ρ_b) = ⟨ΔI_a(ρ_a) ΔI_b(ρ_b)⟩,    (1)

with ΔI_j(ρ_j) = I_j(ρ_j) − ⟨I_j(ρ_j)⟩ the fluctuation of the intensity around its ensemble average. Under the assumption that the source is ergodic [26], the ensemble average appearing in Γ(ρ_a, ρ_b) can be approximated by a time average, in line with the experimental procedure [22]. In addition, we will consider quasi-monochromatic light of central wavelength λ and wavenumber k = 2π/λ, and propagate light from a generic object point ρ_o to a detector point ρ_a (ρ_b) by means of the corresponding paraxial optical transfer functions [27]. We thus obtain the correlation function, which reads, up to irrelevant factors,

Γ(ρ_a, ρ_b) = |∫ d²ρ_o A²(ρ_o) [∫ d²ρ' P(ρ') e^{ik φ_a(ρ', ρ_o, ρ_a)}] [∫ d²ρ'' P(ρ'') e^{ik φ_b(ρ'', ρ_o, ρ_b)}]*|²,    (2)

with P(ρ) the lens pupil function, φ_a and φ_b the paraxial phases accumulated along the two optical paths, and M_j = −z'_j/z_j the magnifications of the planes D^o_j on the sensors D_j, with j = a, b.
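The time-average estimate of Γ in Eq. (1) can be sketched numerically. The following toy model is not the authors' simulation: it gives the two detectors mirrored copies of the same delta-correlated speckle pattern, standing in for the two beam-splitter outputs, and omits the realistic propagation through L_f.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames = 64, 5000

# Accumulate <I_a I_b> and the marginal averages over many chaotic "frames".
Ia_sum = np.zeros(n_pix)
Ib_sum = np.zeros(n_pix)
prod_sum = np.zeros((n_pix, n_pix))

for _ in range(n_frames):
    # Fully developed speckle: complex Gaussian field with unit mean intensity.
    field = (rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix)) / np.sqrt(2)
    Ia = np.abs(field) ** 2        # intensity on D_a
    Ib = np.abs(field[::-1]) ** 2  # intensity on D_b (mirrored copy, toy choice)
    Ia_sum += Ia
    Ib_sum += Ib
    prod_sum += np.outer(Ia, Ib)

# Gamma(x_a, x_b) = <I_a I_b> - <I_a><I_b>, the correlation of fluctuations.
gamma = prod_sum / n_frames - np.outer(Ia_sum, Ib_sum) / n_frames**2
```

In this toy geometry the estimated Γ concentrates on the anti-diagonal x_b = n − 1 − x_a, the discrete analogue of the bisector structure discussed below for Fig. 2.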
In order to develop a feeling for the result of Eq. (2), we report, in the upper-left panel of Fig. 2, a numerical evaluation of the correlation function obtained by considering: the object as an emitter of chaotic light of wavelength λ = 480 nm, consisting of a double-slit mask with center-to-center distance d = 200 µm and slit width d/2, placed at z_m = (z_a + z_b)/2 = 290 mm (i.e., at equal distances from the planes D^o_a and D^o_b); the lens L_f with focal length f = 58 mm and numerical aperture NA = 0.08, as seen from the object plane; the distance between the two imaged planes |z_b − z_a| = 10 mm. Notice that z_a and z_b have been chosen in such a way that the images separately retrieved by detectors D_a and D_b are outside the depth of field of the lens L_f, and are thus out of focus. The density plot of the correlation function reported in the top-left panel of Figure 2 shows that the information about the double-slit mask is contained along the bisector of the plane (x_a, x_b). As we shall see more clearly later, this is a consequence of the "one-to-one" correspondence between the points (ρ_a, ρ_b) of the correlation function and the rays captured by L_f that cross both the plane D^o_a, in ρ_a/M_a, and the plane D^o_b, in ρ_b/M_b. To clarify these concepts and the imaging properties of the correlation function Γ(ρ_a, ρ_b), we shall consider the geometrical-optics limit k → ∞, in which the most relevant contribution to the integral in Eq. (2) can be evaluated by applying the method of stationary phase [28,29]. Actually, the stationary points of the phase k(φ_a − φ_b), appearing in Eq.
(2), with respect to ρ', ρ'' and ρ_o, enable us to determine the geometrical correspondence between points on the object and points on the sensors D_a and D_b, providing the dominant asymptotic contribution to the correlation function:

Γ(ρ_a, ρ_b) ≈ [A²(ρ_o*(ρ_a, ρ_b)) P(ρ_l*(ρ_a, ρ_b))]²,    (4)

with ρ_o* = [(z_b − z) ρ_a/M_a + (z − z_a) ρ_b/M_b]/(z_b − z_a) and ρ_l* = [z_b ρ_a/M_a − z_a ρ_b/M_b]/(z_b − z_a) the points at which the ray crossing D^o_a in ρ_a/M_a and D^o_b in ρ_b/M_b intersects the object plane and the lens plane, respectively. This result shows that, independent of the distance z of the object mask from the lens L_f, in the geometrical limit, the correlation of intensity fluctuations encodes an image of both the (squared) object intensity profile A² and the lens pupil function P. The dependence of A² on both detector coordinates explains the behaviour of the correlation function observed in the top-left panel of Figure 2, as already discussed. The image of the object depends on the coordinate of only one detector, either ρ_a or ρ_b, only if the object mask lies in the corresponding plane D^o_a or D^o_b. For z = z_a (z = z_b), A² no longer depends on ρ_b (ρ_a), and the integration of the correlation function over ρ_b (ρ_a) gives a focused image of the object:

Σ_a(ρ_a) = ∫ d²ρ_b Γ(ρ_a, ρ_b) ∝ [A²(ρ_a/M_a)]²,    (5)

and analogously for Σ_b(ρ_b), with M_a (M_b) the transverse magnification. By working in the wave-optics regime, one would find that this image has the same point-spread function and depth of field as the corresponding conventional image retrieved by sensor D_a (D_b) alone. However, in the more general case in which the object does not lie in either one of the conjugate planes of the detectors and is outside the DOF, as reported in the upper-left panel of Fig. 2, the integral of the correlation function Γ over either one of the detector coordinates gives rise to blurred images. This is shown in the bottom-left panel of the same figure, where integration of Eq. (2) (or, equivalently, of Eq. (4)) over x_b gives rise to a blurred image of the double slit.
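The geometrical correspondence behind the stationary-phase result is plain straight-line ray tracing: a ray is pinned by its crossings of the two reference planes, and its intersections with the object and lens planes follow by linear interpolation. A minimal 1D sketch (with hypothetical helper names of our own):

```python
def ray_points(x_a, x_b, M_a, M_b, z_a, z_b, z):
    """Intersections with the object plane (distance z) and the lens plane
    (z = 0) of the ray crossing plane D^o_a at x_a/M_a and D^o_b at x_b/M_b."""
    xa, xb = x_a / M_a, x_b / M_b      # back-projected detector coordinates
    slope = (xb - xa) / (z_b - z_a)    # transverse slope of the straight ray
    x_obj = xa + slope * (z - z_a)     # crossing point on the object plane
    x_lens = xa - slope * z_a          # crossing point on the lens plane
    return x_obj, x_lens
```

For z = z_a the object-plane point reduces to x_a/M_a, independent of x_b, consistent with the focused-image limit discussed above.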
In order to decouple the image of the object from the image of the lens, thus obtaining a "refocusing" algorithm, we shall define proper linear combinations of the detector coordinates ρ_a and ρ_b, such as the two variables

ρ_r = [(z_b − z) ρ_a/M_a + (z − z_a) ρ_b/M_b]/(z_b − z_a),  ρ_s = [z_b ρ_a/M_a − z_a ρ_b/M_b]/(z_b − z_a).    (6)

By inverting the transformation in Eq. (6), we obtain the refocused correlation function

Γ_ref(ρ_r, ρ_s) = Γ(ρ_a(ρ_r, ρ_s), ρ_b(ρ_r, ρ_s)),    (7)

with

ρ_j(ρ_r, ρ_s) = M_j [(z_j/z) ρ_r + (1 − z_j/z) ρ_s],  j = a, b.    (8)

From Eqs. (6)-(8), it is evident that the performed linear transformation of the argument of Γ realigns all the displaced images corresponding to different values of ρ_s. This is clearly visible in the upper-right panel of Fig. 2, where we report the refocused correlation function obtained by "reordering" the correlation function of the upper-left panel according to the change of variables of Eqs. (7)-(8). As shown in the bottom-right panel of Fig. 2, no blurring occurs anymore upon integrating the refocused correlation function over the variable ρ_s; in fact, this integral gives the final refocused image

Σ_ref(ρ_r) = ∫ d²ρ_s Γ_ref(ρ_r, ρ_s).    (9)

Although the refocused correlation function at any fixed value of ρ_s gives a focused image of the object, the integration over ρ_s reported in Eq. (9) makes it possible to exploit the whole signal collected by the two detectors, and hence to considerably increase the signal-to-noise ratio of the final image. A further simulation has been performed by considering a three-dimensional object, such as a depth-of-field target tilted by 3.8° and characterized by equal line thickness and spacing between the centers of black and white lines of 200 µm. Chaotic light is simulated by randomly illuminating the object with light of transverse coherence length ∼ 100 µm, and the correlation function Γ is reconstructed by averaging products of intensity fluctuations over 30,000 frames; the results are reported in Fig. 3, where refocusing has been obtained through Eqs. (7)-(9). The stacking of all refocused images is also reported therein, thus demonstrating the DOF enhancement enabled by CPI-AP.
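A discrete version of this refocusing is a plain resampling of the measured correlation function. The helper below (our own sketch: hypothetical names, 1D geometry, nearest-neighbour lookup, no interpolation or normalization) reorders a sampled Γ(x_a, x_b) according to the inverse transformation and integrates over the "pupil" variable:

```python
import numpy as np

def refocus(gamma, xs_a, xs_b, M_a, M_b, z_a, z_b, z):
    """Refocus a sampled correlation function gamma[i, j] = Gamma(xs_a[i], xs_b[j])
    onto the plane at distance z. Nearest-neighbour sketch of the change of
    variables; xs_a, xs_b are the detector coordinate grids."""
    n = len(xs_a)
    refocused = np.zeros(n)
    for i, x_r in enumerate(xs_a / M_a):     # refocused-plane grid (toy choice)
        for x_s in xs_a / M_a:               # "pupil" variable grid (toy choice)
            # Invert the transformation: detector points hit by this ray.
            x_a = M_a * (z_a / z * x_r + (1 - z_a / z) * x_s)
            x_b = M_b * (z_b / z * x_r + (1 - z_b / z) * x_s)
            ia = np.argmin(np.abs(xs_a - x_a))
            ib = np.argmin(np.abs(xs_b - x_b))
            refocused[i] += gamma[ia, ib]
    return refocused
```

Summing over the pupil variable pools all detector pairs belonging to the same refocused point, which is the discrete counterpart of the SNR gain noted above.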

CPI-AP with entangled photon illumination
Entangled photons represent the most paradigmatic case of correlated light beams. In the mid-1990s, entangled photons produced by spontaneous parametric down-conversion (SPDC) [26] opened the way to quantum imaging [30]. Despite being less practical to produce than chaotic light, there is evidence that entangled light can provide otherwise inaccessible noise reduction effects [31,32].
The proposed CPI-AP protocol adapted to entangled-photon illumination is pictured in Fig. 4. The "signal" (s) and "idler" (i) photons of the entangled pairs are emitted by SPDC along two different directions, and impinge on the lens L_1, of focal length f_1, which collimates the incoming beams. Only one of the entangled photon beams (the one propagating along path b) illuminates the object of interest: we shall identify with D^g_a the specific plane of the object lying in the focal plane of lens L_1. A pair of identical lenses L_2, placed at a distance f_1 + z_a from L_1, make it possible to reproduce the ghost image of D^g_a on detector D_a, by means of correlation measurements with detector D_b. The lenses L_2, of focal length f_2, serve to image on the detectors D_a and D_b the planes D^o_a and D^o_b, respectively, placed at distances z_a and z_b from L_2. In fact, the distances z'_a and z'_b between L_2 and the detectors, along paths a and b, respectively, satisfy the thin-lens equations

1/z_j + 1/z'_j = 1/f_2,  with j = a, b.    (10)

Hence, similar to the previous case, the planes D^g_a and D^o_b are the "arbitrary planes" chosen within the three-dimensional scene. As we shall prove below, the coincidence counting of photon pairs detected by the two sensors D_a and D_b enables plenoptic imaging of the object of interest.
Coincidence counting is formally described by the Glauber correlation function [33]

G^(2)(ρ_a, ρ_b) = ⟨ψ| E^(−)_a(ρ_a) E^(−)_b(ρ_b) E^(+)_b(ρ_b) E^(+)_a(ρ_a) |ψ⟩,    (11)

where E^(±)_{a,b} are the positive- and negative-frequency contributions to the electric field components that propagate towards D_a and D_b, evaluated at equal times, and

|ψ⟩ = ∫ d²κ_s d²κ_i h_tr(κ_s + κ_i) a†_{k_s} a†_{k_i} |0⟩    (12)

is the quasi-monochromatic (with wavelength λ) biphoton state. Here, the creation operators a†_{k_s} a†_{k_i} generate a pair of photons with wavevectors k_s (signal) and k_i (idler) from the vacuum |0⟩. Both wavevectors have modulus |k_s| = |k_i| = k = 2π/λ, and the variables κ_s, κ_i represent the transverse momentum components with respect to the propagation directions of signal and idler, respectively. The function h_tr appearing in Eq. (12) is related by Fourier transform to the amplitude profile F of the laser pump on the crystal: F(ρ) = ∫ d²κ e^{iκ·ρ} h_tr(κ). Notice that, though we will show the results for beams of equal central wavelength, the working principle is unchanged in case signal and idler have different wavelengths. We now compute the optical propagators of the transverse momentum components towards ρ_j along path j, under the hypothesis that both the aperture of lens L_1 and the pump laser beam have infinite extension, and consider a planar object with amplitude transmission profile A(ρ_o) placed at an arbitrary distance z before the lens L_2. The correlation function reads, up to irrelevant constants,

G^(2)(ρ_a, ρ_b) = |∫ d²ρ_o A(ρ_o) [∫ d²ρ' P(ρ') e^{ik ψ_a(ρ', ρ_o, ρ_a)}] [∫ d²ρ'' P(ρ'') e^{ik ψ_b(ρ'', ρ_o, ρ_b)}]|²,    (13)

where ψ_a and ψ_b are the paraxial phases accumulated along paths a and b, and M_j = −z'_j/z_j are the magnifications provided by the lenses L_2 on the two paths j = a, b. Differently from the chaotic-light case, where the correlation function of Eq. (2) depends on the object (squared) intensity profile, the G^(2) function associated with entangled-photon CPI-AP depends on the transmission amplitude of the object; this indicates that the retrieved plenoptic (ghost) images are coherent rather than incoherent.
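The transverse-momentum entanglement encoded in Eq. (12) can be illustrated with a toy sampling exercise: for a wide pump, h_tr(κ_s + κ_i) is sharply peaked, so the idler momentum is almost exactly opposite to the signal one. The numbers below are hypothetical and serve only to visualize the anti-correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D sampling of |h_tr(k_s + k_i)|^2: the pair-sum spread (set by the
# pump) is much narrower than the single-photon momentum spread.
sigma_pair, sigma_single = 0.05, 1.0
k_s = rng.normal(0.0, sigma_single, 100_000)   # signal transverse momentum
k_sum = rng.normal(0.0, sigma_pair, 100_000)   # k_s + k_i, fixed by the pump
k_i = k_sum - k_s                              # anti-correlated idler momentum

corr = float(np.corrcoef(k_s, k_i)[0, 1])
print(f"signal-idler momentum correlation: {corr:.3f}")
```

In the limit of an infinitely extended pump, sigma_pair → 0 and the correlation tends to −1, which is the regime assumed in the propagator calculation above.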
This difference is a consequence of the coherent nature of entangled photons, as well as of the fact that the object is not in the common path of the entangled photon pairs, but is illuminated by only one of the entangled beams. To interpret the above result, we notice that the terms containing the quadratic phases in the lens coordinates indicate two focusing conditions, one for each path. In path b, the focusing condition reads z = z_b (i.e., z satisfies the thin-lens equation (10) with j = b), indicating that the object is directly imaged on the plane of detector D_b. In path a, the focusing condition z = z_a is less intuitive, since there is no object placed in this path; in fact, this condition corresponds to the situation in which the "ghost image" of the object, placed at a distance z = z_a from the lens L_2 in arm b, is focused on the plane of detector D_a by means of coincidence counting between D_a and D_b [18,30]. Such a focused ghost image is characterized by a positive magnification −M_a = z'_a/z_a: the additional minus sign is due to the double inversion of the ghost image from D^g_a to D_a. In order to show the working principle of CPI-AP in the case of entangled photons, we report in the top-left panel of Fig. 5 a numerical calculation of G^(2) from Eq. (13), in the case of photon wavelength λ = 710 nm, considering imaging lenses L_2 of fixed numerical aperture. As in Fig. 2, also in this case the correlation function of CPI-AP contains information about the double-slit mask, but the images are not oriented along either one of the axes. Due to the positive magnification −M_a, the ghost image of the sample is not inverted, and the coherent images in Figure 5 are inverted compared to the incoherent images in Figure 2. In close analogy with Eq. (5), if the object is placed in one of the reference planes, at a distance z = z_a (z = z_b), the integral of the correlation function over D_b (D_a) yields the ghost image of the plane D^g_a (the conventional image of the plane D^o_b). In the more general case considered in Fig. 5, both these images are heavily blurred.
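The sign discussion above can be checked with elementary arithmetic; the distances below are hypothetical, chosen only to satisfy the thin-lens equation (10).

```python
# Conventional vs ghost-image magnification (hypothetical distances, mm).
f2 = 50.0
z_b, zp_b = 75.0, 150.0   # path b: 1/75 + 1/150 = 1/50, thin lens satisfied
M_b = -zp_b / z_b         # conventional image: negative -> inverted

z_a, zp_a = 75.0, 150.0   # path a: same conjugate pair, for the ghost image
ghost_M = zp_a / z_a      # ghost image: -M_a = z'_a/z_a, positive -> upright

print(f"conventional M_b = {M_b}, ghost -M_a = {ghost_M}")
```

The opposite signs reflect the single inversion of the conventional image against the double inversion undergone by the ghost image.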
By applying the stationary-phase condition to k(ψ_a + ψ_b), we find that the refocused image can be obtained by re-expressing the correlation function in terms of two variables analogous to those of Eq. (6), with M the image magnification. The resulting image visibility comes out to depend on both the slit distance d, which we shall use to define the image resolution, and the longitudinal mask position z, giving information on the depth of field.

Figure 6. Comparison of the resolution versus depth-of-field tradeoff in standard imaging (first two panels), CPI-AP, and previous CPI schemes. The comparison is made by plotting the visibilities of a double-slit mask with center-to-center distance d and slit width d/2, obtained by the considered imaging methods, as a function of both the resolution d and the axial coordinate z, directly related to the image DOF. The position of the mid-point z_m = (z_a + z_b)/2 between the two focused image planes in the CPI-AP protocol is taken as a reference. The first and second panels from the left refer to the standard images read on the detectors D_a and D_b, respectively. The third panel contains the visibility of the refocused CPI-AP image. In the fourth panel, we report for comparison the visibility obtained with one of the previously developed CPI schemes, in which one of the reference planes coincides with the focusing element and the other one is placed at a distance z_m from it.
The first and second panels of Fig. 6 report the visibility of the images directly retrieved by D_a and D_b, as described by Eq. (5). The third panel shows the visibility of the CPI-AP refocused image as given by Eq. (9). It is evident that the region of high visibility extends well beyond the union of the high-visibility regions of the first two panels. The resolution of CPI-AP is maximal in the reference planes z = z_a and z = z_b, where the refocused image coincides with the conventional images described by Eq. (5).
In Fig. 6, the slit distance d ≈ 52 µm is the best resolution that can be achieved by refocusing objects placed at the mid-point z = z_m = (z_a + z_b)/2; here, the visibility of the refocused image is V ≈ 0.1. The CPI-AP protocol enables refocusing objects of this size within a range ∆z_CPI-AP ≈ 14.17 mm, which is more than 10 times larger than the range within which the same double slit can be resolved by conventional imaging, since Σ_a (first panel) and Σ_b (second panel) are characterized by ∆z_a ≈ 1.33 mm and ∆z_b ≈ 1.38 mm, respectively. The slight oscillations observed in the high-visibility region of the refocused image originate from the intrinsically coherent-imaging nature of CPI [see Eqs. (2)-(13)]. Similar results can be obtained in the case of CPI-AP with entangled photons, with the only difference that the image Σ_a corresponds to a ghost image.
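The notion of DOF used here (the z-range over which a double slit of given d stays visible) can be illustrated with a deliberately crude geometric-blur model. This toy sketch does not reproduce the wave-optics visibilities of Fig. 6; all helper names and the blur model are our own assumptions.

```python
import numpy as np

def visibility(z, z_focus, d=52e-3, NA=0.08):
    """Toy visibility of a double slit (separation d, width d/2, lengths in mm)
    blurred by a geometric defocus kernel of radius NA * |z - z_focus|."""
    x = np.linspace(-2 * d, 2 * d, 2001)
    slit = ((np.abs(x - d / 2) < d / 4) | (np.abs(x + d / 2) < d / 4)).astype(float)
    r = max(NA * abs(z - z_focus), x[1] - x[0])   # blur radius, at least one pixel
    kernel = (np.abs(x) < r).astype(float)        # top-hat defocus kernel
    blurred = np.convolve(slit, kernel, mode="same")
    dip, peak = blurred[1000], blurred.max()      # midpoint between slits vs maximum
    return (peak - dip) / (peak + dip)
```

In this model the visibility decreases monotonically away from the focused plane; the z-range where it stays above a chosen threshold (0.1 in the text) plays the role of the DOF.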
The fourth panel of Fig. 6 makes it possible to extend the comparison of the resolution vs. DOF tradeoff of CPI-AP to the one characterizing previous CPI schemes [17][18][19][20]. The visibility plot in the rightmost panel is obtained by considering a CPI system with the same numerical aperture as the CPI-AP protocol, but with one reference plane chosen close to the object and the second one coinciding with the focusing element. In the case of Fig. 6, the DOF of CPI-AP at d ≈ 18 µm is improved by approximately a factor of 3 with respect to previous CPI schemes. Thus, the availability of two reference planes enables both obtaining two high-resolution images within the scene of interest and, most importantly, further improving the maximum achievable DOF.

Conclusions and outlook
The improved DOF vs. resolution tradeoff of CPI-AP is certainly the most striking peculiarity of this novel protocol. Another relevant advantage over previously proposed CPI schemes is that it does not require sharp focusing on either the light source, as in Refs. [17,18], or a lens, as in Refs. [19,20], a task that is not simple to implement and manage. In fact, this difference significantly simplifies both the experimental implementation and the data analysis, and does not require the use of planar sources.
Furthermore, in the chaotic-light-based setup, the propagation of light along two almost identical optical paths makes it possible to exploit the dynamic range of the camera in the most efficient way (and without adding artificial intensity balancing or amplification), as required when the two detectors are implemented by using disjoint parts of the same sensor [34].
In view of future developments, the main perspective for the configuration with chaotic light is to develop a compact CPI camera, capable of enhancing the performance of current digital cameras. As for the CPI-AP protocol with entangled-photon illumination, the most interesting perspective is to employ it for signal-to-noise ratio optimization: the system actually shares many features with the configuration used to obtain sub-shot-noise quantum imaging [31,32,35], and a preliminary, encouraging analysis of the noise reduction factor has been performed in Ref. [36] by considering a setup analogous to the one presented here. The choice of the optimal measurement protocol to enable plenoptic sub-shot-noise imaging is still an open problem, which we shall address in future work.