Compressive Holographic Video

Compressed sensing has been discussed separately in the spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate $10\times$ temporal super resolution with recovery of multiple depths from a single image. Two examples are presented: recording subtle vibrations and tracking small particles within 5 ms.


Introduction
In recent years there has been great interest in exploiting the redundant nature of signals. The redundancy of acquired signals provides the opportunity to sample data in a compressive manner. Candès et al. [1,2] and Donoho [3] have discussed the high probability of reconstructing signals with high fidelity from a few random measurements, provided that they are sparse or compressible in a transformation basis. Since then, the theory of compressed sensing (CS) has been widely applied to computational imaging. Lustig et al. [4] described the natural fit of CS to magnetic resonance imaging (MRI). Gan [5] proposed a block compressed sensing method for natural images, which is applicable to low-power, low-resolution imaging devices. Brady et al. [6] showed that holography can be viewed as a simple spatial encoder for CS and demonstrated 3D tomography from 2D holographic data.
Gabor's invention of holography in 1948 [7] provided an effective method for recording and reconstructing a 3D light field from a captured 2D hologram. The use of a CCD camera to digitally record holographic interference patterns has made digital holography (DH) an emerging technology with a variety of imaging applications, such as particle imaging and tracking in biomedical microscopy [8][9][10][11][12] and the profiling and measurement of physical processes [13][14][15][16][17]. Digital Gabor/in-line holography (DIH) is a simple, lensless, yet effective setup for capturing holograms. The simplicity of DIH is balanced by the requirement that objects be small enough not to occlude the reference beam significantly [18]. Extensive discussion and application of DIH has focused on microscopic imaging, i.e., small and fast-moving objects [19][20][21]. The tracking of fast movements usually entails multiple exposures [8,10,15,16,22,23]. Temporal resolution is usually limited to the 10-100 millisecond range, and little research has been conducted on temporal compression. However, in recent years CS has proved a useful tool for increasing the spatial information encoded in DH [6,24], for example in the compressive Fresnel holography of Rivenson et al. Separately, temporal CS methods have proved successful for reconstructing fast moving scenes by combining cheap low frame-rate cameras with fast spatio-temporal modulating elements. While all of these techniques enable high speed reconstruction of 2D motion, incorporating holographic capture offers the potential to extend these capabilities to 3D motion. Moreover, in many holography setups the energy from each scene point is distributed across the entire detector, so that each pixel contains partial information about the entire scene. This offers the potential for improved performance relative to incoherent architectures.
Our work exploits both spatial and temporal redundancy in natural scenes and generalizes to a 4D (3D position with time) system model. We show that by combining digital holography and coded exposure techniques in a CS framework, it is feasible to reconstruct a 4D moving scene from a single 2D hologram. We demonstrate 10× temporal super resolution and anticipate optical sectioning on the order of 1 cm. As a test case, we focus on macroscopic scenes exhibiting fast motion of small objects (vibrating bars, small particles, etc.).

Generalized system model
Digital Gabor holography requires no separation of the reference beam and the object beam. The object is illuminated by a single beam and the portion that is not scattered by the object serves as the reference beam. This concept leads to a simple experimental setup but demands limited object sizes so that the reference beam is not excessively disturbed. In this case, the imaging process is a recording of the diffraction pattern of a 2D aperture.

Diffraction theory
We first model diffraction through a 2D aperture. According to the Fresnel-Kirchhoff diffraction formula [18], the field at each observation point $E(x, y; z)$ behind a 2D aperture can be written as

$$E(x, y; z) = \frac{1}{i\lambda} \iint_{\Sigma_0} E_0(x_0, y_0) \frac{e^{ikr}}{r} \, dx_0 \, dy_0 \approx \frac{e^{ikz}}{i\lambda z} \iint_{\Sigma_0} E_0(x_0, y_0) \, e^{\frac{ik}{2z}\left[(x - x_0)^2 + (y - y_0)^2\right]} \, dx_0 \, dy_0, \quad (1)$$

where $r$ denotes the distance from $(x_0, y_0)$ on the input plane $\Sigma_0$, with input field $E_0(x_0, y_0)$, to the observation point $(x, y)$ at distance $z$. In the second expression of Eq. (1) we make the further approximation $r \approx z$ in the denominator, but not in the exponent. The integral then becomes a convolution, $E = E_0 * H$, with kernel

$$H(x, y; z) = \frac{e^{ikz}}{i\lambda z} e^{\frac{ik}{2z}(x^2 + y^2)}. \quad (2)$$

The kernel $H$ is also referred to as the point spread function (PSF). Since the propagation is along the z-axis, the form of the kernel is determined by the propagation distance $z$.
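As a concrete illustration, Fresnel propagation with the PSF $H$ can be implemented numerically via the convolution theorem, multiplying the FFT of the sampled field by the (analytically known) Fourier transform of the kernel. The sketch below is a minimal NumPy version; the grid size, pixel pitch, and distances in any usage are illustrative, not the experimental values.

```python
import numpy as np

def fresnel_transfer(nx, ny, pitch, z, wavelength):
    """Analytic Fourier transform of the Fresnel PSF H(x, y; z):
    exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))."""
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))

def propagate(field, pitch, z, wavelength):
    """Propagate a sampled 2D field by distance z (convolution with H via FFT)."""
    H = fresnel_transfer(field.shape[1], field.shape[0], pitch, z, wavelength)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit magnitude, propagation conserves energy and propagating by $-z$ inverts propagation by $z$, which is a convenient sanity check.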

4D model
We now extend our analysis to a four-dimensional model. As illustrated in Fig. 1, consider a 4D field $V(x, y, z, t)$ which propagates along the positive z-direction. Along the propagation path, a high-speed coded mask $M(x, y, t)$ is located at $z_1$, and a sensor is placed at the sensing plane $z_2$. In one frame, the sensor captures the intensity of the field during an exposure time of $\Delta t$. The volume can be discretized into $N_d$ planes, with the n-th depth plane at a distance $d_n$ with respect to the observation plane at $z_0$.

Fig. 1. Schematic of the 4D model: a 4D field $V(x, y, z, t)$ whose n-th depth plane lies a distance $d_n$ from the observation plane $z_0$; the temporally coded mask $M(x, y, t)$ at $z_1$; and the captured image $G(x, y, \Delta t)$, integrated over $\Delta t$, on the sensor at $z_2$.

In Gabor holography, the object beam and the reference beam overlap with each other. This requires the objects to be sparse so that the occlusion of the reference beam is negligible. Under this assumption, the field $V$ in reality represents the summation of the object field $O$ and the constant reference field $R$,

$$V(x, y, z, t) = O(x, y, z, t) + R. \quad (3)$$

Thus, the field at $z_0$ is

$$E_{z_0}(x, y, t) = \sum_{n=1}^{N_d} H_{d_n} * V(x, y, d_n, t), \quad (4)$$

where $H_{d_n}$ denotes the convolutional kernel for distance $d_n$.
At the sensing plane, during one exposure time $\Delta t$, the sensed image can be expressed as an integral of the intensity of the field,

$$G(x, y; \Delta t) = \int_0^{\Delta t} \left| H_{z_2 - z_1} * \left\{ M(x, y, t) \left[ H_{z_1 - z_0} * E_{z_0}(x, y, t) \right] \right\} \right|^2 dt, \quad (5)$$

where $E_{z_0}$ denotes the field at the observation plane $z_0$. Equation (5) describes the continuous form of the sensing process. However, the mask is operated in a discrete fashion at a high frame rate. Suppose that for each sensor frame the coded mask changes T times at equal intervals of $\tau = \Delta t / T$ during $\Delta t$. Then the discretized form of G is

$$G(x, y) = \sum_{i=1}^{T} \tau \left| H_{z_2 - z_1} * \left\{ M_i \left[ H_{z_1 - z_0} * \left( \sum_{n=1}^{N_d} H_{d_n} * V_i(x, y, d_n) \right) \right] \right\} \right|^2 = \sum_{i=1}^{T} \tau \left| O_{c,i} + R_{c,i} \right|^2, \quad (6)$$

where we denote $O_{c,i}$ and $R_{c,i}$ as the transformed object and reference fields at the capture plane $z_2$ for each time frame $i$.
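The discretized sensing process can be sketched end to end: for each of the T mask frames, sum the propagated depth planes at $z_0$, propagate to the mask plane, apply the code, propagate to the sensor, and accumulate the intensity over $\tau$. Below is a minimal NumPy sketch under stated assumptions: the reference beam is folded into the volume `V` (as in the text), `z1` and `z2` are the mask and sensor distances measured from $z_0$, and all dimensions are illustrative.

```python
import numpy as np

def propagate(field, pitch, z, wavelength=532e-9):
    """Fresnel propagation by distance z via the analytic transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    Hf = np.exp(2j * np.pi * z / wavelength) * \
         np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * Hf)

def capture(volume, masks, depths, z1, z2, pitch, tau):
    """Discrete sensing model in the spirit of the text.

    volume: complex array (T, N_d, ny, nx); field per time frame and depth
            plane, with the reference beam folded in.
    masks:  array (T, ny, nx); DMD code M_i at the mask plane.
    depths: distances d_n of each depth plane from the observation plane z0.
    z1, z2: distances of the mask and sensor planes from z0.
    """
    G = np.zeros(volume.shape[2:])
    for frames, M in zip(volume, masks):
        # Field at z0: sum over depth planes convolved with H_{d_n}.
        E_z0 = sum(propagate(frames[n], pitch, depths[n]) for n in range(len(depths)))
        E_z1 = propagate(E_z0, pitch, z1)           # propagate z0 -> z1
        E_z2 = propagate(M * E_z1, pitch, z2 - z1)  # mask, then z1 -> z2
        G += tau * np.abs(E_z2) ** 2                # integrate intensity over tau
    return G
```

The returned image is a single nonnegative frame that sums the coded intensities of all T time frames, mirroring the single-exposure capture.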
The captured intensity term can then be expanded as

$$I = |O_c + R_c|^2 = |O_c|^2 + |R_c|^2 + 2\,\Re\{R_c^* O_c\}.$$

(The time-frame index $i$ is omitted here.) In [6], Brady et al. neglected the nonlinearity imposed by the squared magnitude and considered the two terms $|O_c|^2 + |R_c|^2$ (often referred to as noise and the zero-order/DC term) as noise in the measurement model, showing that they can be eliminated algorithmically by a CS reconstruction algorithm. In this work, we follow the same approach, and the measured intensity can be expressed as

$$G = \sum_{i=1}^{T} 2\tau\,\Re\{R_{c,i}^* O_{c,i}\} + E_E, \quad (7)$$

where $E_E$ combines $|O_c|^2$ and $|R_c|^2$ into a single term considered as error. If we assume the object to be sparse, then the term $|O_c|^2$ is negligible. Thus, the error term can be approximately treated as $E_E \approx |R_c|^2$. In experiments, we approximate this error term by recording a background image and subtracting it from the scene image before reconstruction. Now assume that the sensor pixels have the same dimensions as the mask pixels, so the unknown field $O$ has spatial dimensions $N_{M_x} \times N_{M_y}$, depth dimension $N_d$, and temporal dimension $T$. Further, if we represent the convolutional operations in Eq. (6) as Toeplitz matrices, we obtain the compact form

$$g = A(o) + e = S_T\, T_{h_2}\, T_M\, T_{h_1}\, T_{d_n}\, o + e, \quad (8)$$

where $T_M$ is the diagonal matrix applying the mask codes $M_i$, the notation and dimensions of the remaining variables are summarized in Table 1, and $A(\cdot)$ describes the complete forward model. In order to reconstruct the 4D volume, an optimization problem is formed as

$$\hat{o} = \arg\min_o \frac{1}{2} \left\| g - A(o) \right\|_2^2 + \lambda R(o), \quad (9)$$

where $\lambda > 0$ is a regularization parameter and $R(\cdot)$ is a regularizer on the unknown 4D field $o$.
In this work, we employ total variation (TV) as the regularization function, defined as

$$R(o) = \| D_x o \|_1 + \| D_y o \|_1 + \| D_z o \|_1 + \| D_t o \|_1, \quad (10)$$

where $D_x$, $D_y$, $D_z$, $D_t$ denote finite-difference operators along the four dimensions, and $o$ is the vectorized version of the unknown 4D object field $O$. Eq. (10) is a generalized 4D TV regularizer; however, the choice of regularizer may vary with the purpose of the reconstruction and/or the properties of the scene. In our experiments, a 3D TV (x, y, z) is used for resolving depths (Fig. 3), and TV along the temporal dimension is included for recovering subtle movement (Fig. 5). Also note that independent regularization parameters may be chosen for the spatial (x, y, z) and temporal (t) dimensions. We used the two-step iterative shrinkage/thresholding (TwIST) algorithm [40] for reconstruction.
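A minimal sketch of an anisotropic 4D TV of this kind, with independent weights per dimension so that space and time can be regularized separately, is shown below; the exact TV variant used in the reconstruction may differ.

```python
import numpy as np

def tv4d(o, weights=(1.0, 1.0, 1.0, 1.0)):
    """Anisotropic 4D total variation of a field o with axes (t, z, y, x).

    Sums the l1 norms of forward finite differences along each dimension,
    with an independent weight per dimension.
    """
    return sum(w * np.abs(np.diff(o, axis=ax)).sum()
               for ax, w in enumerate(weights))
```

A constant volume has zero TV, while any spatial or temporal variation contributes proportionally to its weighted absolute differences.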

Setup
Fig. 2 shows the schematic of the experimental setup. The illumination is produced by a diode laser with a wavelength of 532 nm. The input beam is expanded and collimated by an ND filter and a collimating lens pair (plano-convex lenses, 300 mm/35 mm ≈ 8.57× magnification; ND filter omitted). A digital micromirror device (DMD) is used to perform pixel-wise temporal modulation of the light field. For our experiments, we used the DLP® LightCrafter 4500™ from Texas Instruments Inc. The light engine includes a 0.45-inch DMD with more than one million mirrors, each with a 7.6 µm pitch, arranged in 912 columns by 1140 rows in a diamond pixel array geometry [41]. The DMD is placed 70 mm away from the objects. An objective lens (a single lens) is placed in front of the CMOS sensor and aligned with the DMD so that it images the DMD plane onto the sensor. The lens introduces a quadratic phase factor inside the integral of Eq. (1). Thus, if the sensor is placed at a distance of 2f from the objective lens, the phase is the same as at a distance of −2f from the lens; in this way, $T_{h_2}$ from Eq. (8) reduces to the identity matrix. We used a monochromatic CMOS sensor with a resolution of 1920 × 1200 and a pixel pitch of 5.86 µm. The key factor is the synchronization between the DMD and the sensor. Each DMD pattern can be projected with a period as short as $P_T$ = 500 µs and an effective pattern exposure of $P_d$ = 250 µs. After N patterns are projected, a trigger signal is sent to the camera, which controls the shutter and results in a single exposure.

Table 1. Notation introduced in Eq. (8).

$g$: vectorization of the measured intensity $G$ from Eq. (7).
$T_{d_n}$: Toeplitz matrix referring to $H_{d_n}$ from Eq. (6); propagation and summation in depth (over $N_d$) for all time frames.
$T_{h_1}$: Toeplitz matrix referring to $H_{z_1 - z_0}$ from Eq. (6); propagation of all time frames from $z_0$ to $z_1$.
$T_{h_2}$: Toeplitz matrix referring to $H_{z_2 - z_1}$ from Eq. (6); propagation of all time frames from $z_1$ to $z_2$.
$S_T$: matrix referring to the outer summation of Eq. (6); summation in time (over $T$).

Subsampling holograms
We start our experiments by examining the reconstruction performance of subsampled holograms. Recovery of a 3D object field from a 2D hologram has been proposed in previous work [6]. The recovery can be treated as inference of high-dimensional data from undersampled measurements. Fig. 3 shows the experimental results of 3D recovery with pixel-wise subsampling. For this experiment, we captured two static hairs from craft fur (see Fig. 5a) placed 7.1 cm and 10.1 cm away from the sensor. Fig. 3(a) shows the captured image. To preprocess the captured hologram, we first capture an image on the sensor with no object placed in the field of view; we refer to this as the background image. Note that this captured image corresponds to the term $R_c^2$ in Eq. (7). We then subtract the background image from the hologram, downsample to 960 × 600, and crop the central 285 × 285 ROI around the object. Fig. 3(b) shows the captured image of one pattern from the DMD. Each pattern randomly selects 10% of the entire image. To avoid aliasing artifacts caused by the diamond-shaped sampling patterns on the DMD, we group 4 × 4 adjacent pixels on the DMD into a single superpixel [41]. In our reconstructions, to form the matrix A from Eq. (8), we capture images of the mask with no object present. These captured images are divided by the background image to remove the effect of beam non-uniformity. Fig. 3(c) shows the subsampled hologram. Figs. 3(d) and (e) compare reconstructions for the full hologram and the subsampled hologram. The image (285 × 285) was reconstructed into a 3D volume (285 × 285 × 120) with a depth range from 65 mm to 108 mm. Shown are the images reconstructed at the depth planes corresponding to the locations of the two hairs. In order to quantify performance in terms of depth resolution, we used the block variance [42] at an edge pixel of the cross-section through the two hairs. Higher variance implies higher contrast and, thus, higher resolution.
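The preprocessing described above (background subtraction, downsampling, central ROI crop) can be sketched as follows. This is a minimal version: the use of simple decimation for downsampling is an assumption, and `preprocess` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def preprocess(hologram, background, ds=2, roi=285, center=None):
    """Subtract the background image (approximating R_c^2), downsample,
    and crop a central ROI, as described in the text."""
    g = hologram.astype(float) - background.astype(float)
    g = g[::ds, ::ds]  # decimation downsample (assumption; binning also plausible)
    cy, cx = center if center else (g.shape[0] // 2, g.shape[1] // 2)
    half = roi // 2
    return g[cy - half: cy - half + roi, cx - half: cx - half + roi]
```

With the sensor's 1920 × 1200 frames and `ds=2`, this reproduces the 960 × 600 downsampled image and a 285 × 285 ROI.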
The block variance was computed within a window of 21 × 21 pixels, highlighted in blue and red in Figs. 3(d) and (e). Fig. 3(f) shows the normalized variance versus depth from the sensor. Two principal peaks are observed, which can be inferred as the in-focus distances of the two hairs. The peak around $d_1$ has a strong signal in all four curves because the object located there is larger than the other. As can be seen, using only 10% of the data degrades the resolution of both the back-propagation (BP) and CS reconstructions. In the 10% BP case, it is even harder to localize the second object because of the impact of the mask pattern. This can also be observed in the left panel of (e), where the back-propagated mask severely affects the objects. The variance falls off quickly with depth in the CS reconstructions, which reflects both the denoising effect and the optical sectioning power of CS. In the 10% reconstruction, the intermediate volume between the two objects was not denoised as well as in the 100% case. This shows that greater subsampling factors reduce the effective depth resolution.
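The block-variance focus metric can be sketched as below: take the variance of a 21 × 21 window at a fixed pixel on each reconstructed depth plane, and treat the peak as the in-focus depth. The `focus_depth` helper is hypothetical, added here for illustration.

```python
import numpy as np

def block_variance(img, cy, cx, win=21):
    """Local variance in a win x win window centered at (cy, cx),
    used as a contrast/focus metric."""
    h = win // 2
    patch = img[cy - h: cy + h + 1, cx - h: cx + h + 1]
    return patch.var()

def focus_depth(stack, cy, cx, win=21):
    """Index of the depth plane maximizing block variance, plus the
    normalized variance curve over depth."""
    v = np.array([block_variance(np.abs(p), cy, cx, win) for p in stack])
    return int(np.argmax(v)), v / v.max()
```

Planes where the window straddles a sharp edge score high variance, while defocused planes, whose diffraction blur smooths the window, score low.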

Temporal multiplexing
In the previous section, we analyzed the effect of subsampling on reconstruction performance for compressive holography. Here we show how to utilize the excess pixel bandwidth of the sensor to increase temporal resolution. A simulation experiment was carried out to quantitatively analyze our imaging system (Fig. 4). As shown in Fig. 4(a), two layers of objects (peranema at different scales) were used as a test case. Each layer had 256 × 256 pixels. The pixel pitch was set so that the whole scene (9.85 × 9.85 mm) was approximately the size of the DMD. The first object was placed 70 mm away from the sensor; the other was placed a further distance dz behind it, where dz is a varied parameter. A spatiotemporal subsampling mask was displayed on the DMD: when n time frames are required, each frame randomly selects 1/n of the pixels to display, so that the summation of the n frames yields the full-resolution scene image. In the simulation, we omitted the propagation between the DMD and the sensor. For reconstruction, we compared back-propagation and compressed sensing. To obtain better reconstructions, we inserted 4 intermediate planes between the two objects. The results are shown in Figs. 4(b) and (c). In (b), the peak signal-to-noise ratio, $\mathrm{PSNR} = 10 \log_{10}(\mathrm{peak\ value}^2 / \mathrm{MSE})$, where MSE is the mean-squared error between the reconstruction and the input object field, was used to measure reconstruction performance. The PSNR is computed on the 4D volume, which can also be treated as an average over multiple time frames; the higher the PSNR, the better the reconstruction fidelity. We picked a point from (b), marked with a red ring, to show in (c) the visual meaning of the PSNR values. It can be seen that lower subsampling rates cause worse reconstruction performance. PSNR also decreases as the object spacing decreases.
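The spatiotemporal subsampling masks described here, where each of the n time frames randomly selects a disjoint 1/n fraction of the pixels so that the n masks together cover the full pixel grid, can be generated as below (a sketch; the paper's actual mask generation may differ in detail).

```python
import numpy as np

def temporal_masks(shape, n, seed=0):
    """n binary masks that partition the pixel grid: each pixel is randomly
    assigned to exactly one of the n time frames, so the masks sum to the
    all-ones (full-resolution) sampling pattern."""
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, n, size=shape)  # which frame owns each pixel
    return np.stack([(assignment == i).astype(np.uint8) for i in range(n)])
```

Each frame then carries roughly 1/n of the pixels (exactly 1/n only in expectation), and summing the n coded frames reproduces full spatial sampling.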

Spatiotemporal recovery for fast-moving objects
We present two illustrative examples aimed at observing subtle vibrations and tracking small but fast-moving particles. Fig. 5 shows a reconstruction demonstrating a 10× increase in temporal resolution. The captured image contains several strands of hair blown by an air conditioner. From a single captured image, we reconstruct 2 depth slices and 10 frames of video. In the case of small lateral movement, i.e. vibration, it is feasible to apply total variation along the time dimension. For ease of comparison, 3 time frames (3rd, 6th, 9th) are shown for both back-propagation and compressed sensing.

Fig. 5. Reconstruction results from a single image (moving hairs). 10 video frames and two depth frames are reconstructed from a single captured hologram. Due to space constraints, 3 video frames (3rd, 6th, 9th) and two depths ($d_1$ = 73 mm, $d_2$ = 111 mm) are presented. (Supplemental videos included.)

In terms of depth, our CS result shows well-separated objects at different depth layers, while the back-propagation method fails to achieve optical sectioning. The movement of the object is also recovered in our CS result. Fig. 6 shows another reconstruction, for several dropped flakes of glitter. The glitter flakes in Fig. 6(a) were about 1 mm in size and were dropped in a range of 60 mm to 80 mm from the sensor; they were also blown by an air conditioner. (b) shows the captured single image. (c) shows the preprocessed image after background subtraction. In this case, the glitter flakes were moving at high speed, and there was no overlap between two consecutive frames for the same flake, so each frame was recovered independently. (d) shows a reconstruction map of 2 depths and 4 time frames, in which the downward and leftward motion of two glitter flakes can be observed. A refocusing method similar to [42] was used.
Here, we scanned the reconstructed image with a 21 × 21 window and computed the normalized variance to obtain the focused depth information. If the normalized variance at a defocused depth was higher than 0.5, the pixel was rejected as background/noise. Adjacent pixels with similar variance profiles were treated as a single particle. (e) shows the normalized variance for two particles at $d_1$ and $d_2$; the particles are tracked at the two locations indicated by the arrows in (d). The overall tracking results are shown in Figs. 6(f) and (g). Seven particles with 4D motion are detected within 5 ms. In (f), the temporal transitions are represented by arrows in 10 different colors. (g) shows a velocity chart of the 7 particles. The velocity of each particle was computed as $v(t_n) = [d(t_{n+1}) - d(t_{n-1})] / (2\Delta t)$, where $d(t_n)$ denotes the 3D location at the n-th time frame and $\Delta t = 500\,\mu\mathrm{s}$. The velocities of the particles range from 0.7 m/s to 5.5 m/s.
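The central-difference velocity estimate $v(t_n) = [d(t_{n+1}) - d(t_{n-1})] / (2\Delta t)$ can be computed from a tracked trajectory as follows; `track` is a hypothetical T × 3 array of 3D positions, one row per time frame.

```python
import numpy as np

def particle_velocity(track, dt=500e-6):
    """Central-difference velocity from a particle's 3D track.

    track: array (T, 3) of positions d(t_n) in meters, one row per frame.
    Returns an array (T-2, 3): v(t_n) = [d(t_{n+1}) - d(t_{n-1})] / (2*dt).
    """
    track = np.asarray(track, dtype=float)
    return (track[2:] - track[:-2]) / (2 * dt)
```

For a particle moving at constant velocity, the estimate recovers that velocity exactly; the first and last frames have no central difference and are dropped.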

Conclusion
We have demonstrated two illustrative cases in which 4D spatio-temporal data is recovered from a single captured 2D hologram. In the case of vibrating hairs, 2 depth layers and 10 video frames in time were recovered. In the case of dropped glitter flakes, a 4D volume was reconstructed to track the motion of small particles. The prototype shows that it is possible to simultaneously exceed the capture rate of imagers and recover multiple depths with reasonable depth resolution. The coded-exposure technique enables high speed imaging with a simple low frame-rate camera, and digital in-line holography brings the capability of 3D tomographic imaging with a simple experimental setup. Our compressive holographic video technique is also closely related to the phase retrieval problems commonly faced in holographic microscopy. Our space-time subsampling technique can be viewed as a sequence of coded apertures applied to a spatiotemporally varying optical field. In the future we plan to explore the connections between our CS reconstruction approach and the methods introduced in [31].