Compressive multiple view projection incoherent holography

Multiple view projection holography is a method for obtaining a digital hologram by recording different views of a 3D scene with a conventional digital camera. These views are digitally manipulated to create the digital hologram. The method requires a simple setup and operates under white-light illumination. The multiple views are often generated by a camera translation, which usually involves a scanning effort. In this work we apply a compressive sensing approach to the multiple view projection holography acquisition process and demonstrate that the 3D scene can be accurately reconstructed from the highly subsampled generated Fourier hologram. It is also shown that the compressive sensing approach, combined with an appropriate system model, yields improved sectioning of the planes at different depths.

©2011 Optical Society of America

OCIS codes: (070.0070) Fourier optics and signal processing; (090.1995) Digital holography; (100.3190) Inverse problems; (110.1758) Computational imaging; (100.6890) Three-dimensional image processing; (100.6950) Tomographic image processing.

Received 4 Jan 2011; revised 22 Feb 2011; accepted 27 Feb 2011; published 17 Mar 2011. 28 March 2011 / Vol. 19, No. 7 / OPTICS EXPRESS 6109

i. By taking a Fourier transform of the hologram h(m, n), we obtain a reconstruction that corresponds only to the z = 0 plane of the scene.

ii. In general, to obtain a reconstruction corresponding to a depth z_i, we should multiply the hologram by a quadratic phase function, viz. Eq. (2), where N_x and N_y are the numbers of pixels in the x and y directions, respectively. For simplicity, N_x = N_y = N.
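As an illustrative sketch only (the paper's exact phase kernel is not reproduced in this excerpt), the two reconstruction steps above can be mimicked in numpy. The constant `gamma`, which lumps together the wavelength and pixel-pitch factors, is a placeholder assumption, not a value from the paper:

```python
import numpy as np

def reconstruct_plane(hologram, z, gamma=1.0):
    """Multiply the Fourier hologram by a quadratic phase tuned to depth z,
    then take a 2-D Fourier transform. `gamma` is an illustrative stand-in
    for the unspecified wavelength/pixel-pitch constants."""
    N, _ = hologram.shape
    m = np.arange(N) - N // 2
    mm, nn = np.meshgrid(m, m)
    phase = np.exp(1j * np.pi * gamma * z * (mm ** 2 + nn ** 2) / N ** 2)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * phase)))
```

Setting z = 0 makes the quadratic phase identically 1, so the routine reduces to a plain 2-D Fourier transform, consistent with step (i).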
Eq. (2) can be rewritten in matrix-vector multiplication form, where u_i is a 2…

It is noted that many conventional data compression methods, such as wavelet compression, cannot satisfy all of the above constraints at the same time.
Compared to wavelet compression, Compressed Sensing (CS) can reduce energy consumption while achieving a competitive data compression ratio.
However, current CS algorithms only work well for sparse signals or signals with sparse representation coefficients in some transformed domains (e.g., the wavelet domain).
Since EEG is neither sparse in the original time domain nor sparse in transformed domains, current CS algorithms cannot achieve good recovery quality.
This study proposes using Block Sparse Bayesian Learning (BSBL) [1] to compress/recover EEG. The BSBL framework was initially proposed for signals with block structure. This study explores the feasibility of using the BSBL technique for EEG, which is an example of a signal without distinct block structure.

II. COMPRESSED SENSING AND BLOCK SPARSE BAYESIAN LEARNING
CS is a new data compression paradigm in which a signal of length N, denoted by x, is compressed by a full row-rank random matrix Φ with M < N rows:

y = Φx,  (1)

where y is the compressed data and Φ is called the sensing matrix. CS algorithms use the compressed data y and the sensing matrix Φ to recover the original signal x. Their success relies on the key assumption that most entries of x are zero (i.e., x is sparse).
When this assumption does not hold, one can seek a dictionary matrix D such that x can be expressed as x = Dz with z sparse; recovery then amounts to finding a sparse z satisfying

y = ΦDz.  (2)
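As a minimal numpy sketch of this model (not the algorithms compared later in this note): a signal that is sparse in an inverse-DCT dictionary is compressed by a random Gaussian Φ and recovered with Orthogonal Matching Pursuit, used here as a generic stand-in sparse solver; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 128, 64, 4                     # signal length, measurements, sparsity

# Orthonormal inverse-DCT (DCT-II synthesis) dictionary D, so x = D @ z.
n = np.arange(N)
D = np.cos(np.pi * np.outer(n + 0.5, n) / N)
D[:, 0] *= np.sqrt(1.0 / N)
D[:, 1:] *= np.sqrt(2.0 / N)

# K-sparse coefficient vector z and the (dense) signal x it synthesizes.
z = np.zeros(N)
z[rng.choice(N, K, replace=False)] = rng.normal(size=K)
x = D @ z

# Compression y = Phi @ x with a random Gaussian sensing matrix (M < N).
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A,
    least-squares fitting y on the selected support at each step."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    z_hat = np.zeros(A.shape[1])
    z_hat[support] = coef
    return z_hat

z_hat = omp(Phi @ D, y, K)               # recover the sparse coefficients
x_hat = D @ z_hat                        # resynthesize the signal
```

Because D is orthonormal, recovering z from y = (ΦD)z and resynthesizing x = Dz reproduces the original signal whenever the solver finds the true support.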
When CS is used in a telemonitoring system, signals are compressed on the sensors according to (1). This compression stage consumes on-chip energy of the wireless body area network (WBAN). The signals are then recovered by a remote computer according to (2); this stage does not consume any energy of the WBAN.
Despite many advantages, the use of CS in telemonitoring has been limited to a few types of signals, because most physiological signals, EEG included, are neither sparse in the time domain nor sparse enough in transformed domains.
This issue can be solved by the BSBL framework [1], [2]. It assumes the signal x can be partitioned into a concatenation of non-overlapping blocks, only a few of which are non-zero. Thus, it requires users to define the block partition of x.
However, it turns out that the user-defined block partition does not need to be consistent with the true block partition [3]. Further, they found that even if a signal has no distinct block structure, the BSBL framework is still effective.
This makes it feasible to use BSBL for the CS of EEG, since EEG has arbitrary waveforms and its representation coefficients z generally lack block structure.
Currently, there are three algorithms in the BSBL framework. In this experiment, they chose a bound-optimization based algorithm, denoted by BSBL-BO. Details on the algorithm and the BSBL framework can be found in [1].

III. EXPERIMENTS OF COMPRESSED SENSING OF EEG
The following experiments compared BSBL-BO with some representative CS algorithms in terms of recovery quality.
Two performance indexes were used to measure recovery quality. One was the Normalized Mean Square Error (NMSE); the other was the Structural SIMilarity index (SSIM) [4]. SSIM measures the similarity between the recovered signal and the original signal and is a better performance index than NMSE for structured signals.
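A compact sketch of the two indexes (the SSIM here is a single-window simplification of the sliding-window index of [4]; the regularization constants c1 and c2 are illustrative choices):

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized Mean Square Error: ||x_hat - x||^2 / ||x||^2 (lower is better)."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))

def ssim_global(x, x_hat, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole signal; the standard index of [4]
    averages this statistic over local sliding windows."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    mx, my = x.mean(), x_hat.mean()
    vx, vy = x.var(), x_hat.var()
    cov = np.mean((x - mx) * (x_hat - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

A perfect reconstruction gives NMSE 0 and SSIM 1; any distortion pushes NMSE up and SSIM below 1.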
In the first experiment, D was an inverse Discrete Cosine Transform (DCT) matrix, and thus z was the vector of DCT coefficients of the signal. In both experiments the sensing matrices Φ were sparse binary matrices, in which every column contained 15 entries equal to 1 at random locations while all other entries were zero.
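A sensing matrix of the kind described (each column holding 15 ones at random rows) can be generated as follows; the function name and seed are our own:

```python
import numpy as np

def sparse_binary_sensing(m, n, ones_per_col=15, seed=0):
    """Sparse binary sensing matrix: each column has `ones_per_col` entries
    equal to 1 at distinct random rows; all other entries are zero."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        phi[rng.choice(m, size=ones_per_col, replace=False), j] = 1.0
    return phi

Phi = sparse_binary_sensing(192, 384)    # the size used below for one epoch
```

Since the entries are 0/1 and sparse, computing y = Φx needs only a few additions per measurement, which fits the low-energy on-sensor compression motivation above.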

A. Experiment 1: Compressed Sensing with DCT
This example used a common dataset ('eeglab_data.set') from EEGLab. The dataset contains 32-channel EEG signals with 30720 data points per channel; each channel signal consists of 80 epochs of 384 points each.
To compress the signals epoch by epoch, they used a 192×384 sparse binary matrix as the sensing matrix Φ and a 384×384 inverse DCT matrix as the dictionary matrix D. Two representative CS algorithms were compared in this experiment. One was Model-CoSaMP, which has high performance for signals with known block structure. The other was an L1 algorithm (implemented with the CVX toolbox). For each event, they calculated the ERP by averaging the associated 250 recovered epochs.
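The ERP computation itself is a pointwise average over the epochs of one event. A self-contained sketch with synthetic trials (the waveform and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 250, 384

# Synthetic stand-in: each trial = common evoked waveform + zero-mean noise.
t = np.linspace(0.0, 1.0, n_samples)
template = np.sin(2 * np.pi * 3 * t)
trials = template + rng.normal(scale=0.5, size=(n_trials, n_samples))

# The ERP is the pointwise average over all trials of one event;
# averaging shrinks the noise by roughly 1/sqrt(n_trials).
erp = trials.mean(axis=0)
```

This is why noise introduced by a poor CS recovery is so visible in the averaged ERPs: it does not average out the way trial-to-trial background activity does.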
Figure 3(a) shows the ERP for the 'left direction' event and the ERP for the 'right direction' event, averaged from the dataset recovered by the L1 algorithm. Figure 3(b) shows the ERPs from the dataset recovered by BSBL-BO. Figure 3(c) shows the ERPs from the original dataset.
Clearly, the ERPs produced by the L1 algorithm were noisy.

The ERPs averaged from the dataset recovered by BSBL-BO maintained all the details of the original ERPs with high consistency. The SSIM and NMSE of the ERPs produced by the L1 algorithm were 0.92 and 0.044, respectively; in contrast, the SSIM and NMSE of the ERPs produced by BSBL-BO were 0.97 and 0.008.

IV. DISCUSSIONS
Even with various dictionary matrices, the representation coefficients of EEG signals are still not sparse. Therefore, current CS algorithms perform poorly, and their recovery quality is not suitable for many clinical applications or cognitive neuroscience studies.
Instead of seeking optimal dictionary matrices, this study proposed a method that uses general dictionary matrices yet achieves sufficient recovery quality for typical cognitive neuroscience studies.
The empirical results suggest that, when using the BSBL framework for EEG compression/recovery, seeking optimal dictionary matrices is not crucial.

Compressed sensing can be applied for two main purposes:
- i) It can lower the amount of data needed and thus speed up acquisition. An example of such an application in medical imaging is dynamic MRI [4].
- ii) It can improve the reconstruction of signals/images in fields where constraints or the physical acquisition setup yield very sparse data sets. A typical example is seismic data recovery in geophysics [5].
The objective of this paper is to give the reader an overview of the different attempts to show the feasibility of CS in medical ultrasound. The studies are classified according to the data that are considered to be sparse:
- the scatterer distribution itself, the pre-beamforming channel data, the beamformed RF signal, and even Doppler data.
How to make the data sparse in some domain is the key idea in casting the inverse problem as a CS problem.
In this paper, they present several schemes that expand the data into sparse representations.

INFONET, GIST / 15
A central concern in CS is that the data under consideration should have a sparse expansion in some dictionary:
- Fourier basis, wavelet basis, a dictionary learned from data, etc.
- i.e., the number of non-zero coefficients of the image or signal in this representation basis should be as small as possible.
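A toy numpy illustration of this idea: a two-tone signal is dense in the time domain but has only four non-zero coefficients in the Fourier basis (the frequencies and thresholds are illustrative):

```python
import numpy as np

# Two sinusoids at integer frequencies over a full window: dense in time,
# exactly 4-sparse in the Fourier basis (two bins per real sinusoid).
N = 256
t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.cos(2 * np.pi * 20 * t / N)

coeffs = np.fft.fft(x) / N
tol = 1e-6 * np.max(np.abs(coeffs))
time_support = int(np.sum(np.abs(x) > 1e-6 * np.max(np.abs(x))))
freq_support = int(np.sum(np.abs(coeffs) > tol))   # 4 non-zero Fourier bins
```

The same signal therefore qualifies as "sparse" for CS purposes only once the right representation basis is chosen.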
One of the main distinctions among the existing studies is the type of signal/image to be reconstructed and the choice of the representation in which the US data are assumed to be sparse.
We overview the following models and their sparse domains. Note, however, that the assumption that most of the scatterers have an echogenicity close to zero is rather unusual.
The basic idea [12-13] is to write the direct scattering problem and solve the inverse problem under the constraint that the scatterer distribution is sparse.
- Here the scattered pressure is received by the transducer elements after transmission of a plane wave in a given direction; the model operator represents propagation and interaction with the scatterers, and the unknown is the scatterer distribution lying on a regular grid.
If the scatterer distribution is assumed to be sparse, this problem is equivalent to the CS problem. Under the same assumption, [14,15] proposed another approach based on finite rate of innovation and Xampling.
The edges are well reconstructed, but the speckle is almost completely lost in some parts of the images.
This is consistent with the assumption of scatterer-map sparsity.
Another group of authors [21-24] considers that the raw channel data gathered at each transducer element during receive have a sparse decomposition in some basis.
The objective of such an approach is to reduce the quantity of pre-beamformed data acquired and to evaluate the ability of this approach to reconstruct B-mode images of good quality.

- Generally, mutual information is directly proportional to accuracy.
- However, while the classification accuracy of a given method is high, we sometimes observe a low mutual information content.

In combination with EEG, they find that NIRS is capable of significantly enhancing event-related desynchronization (ERD)-based BCI performance.

Behaviors of HbO and HbR
- Evaluation of the hybrid NIRS-EEG brain-computer interface:
  - Hybrid measurement is helpful in the classification of SMR-based BCI.
  - Classification of the NIRS response is not more accurate than the EEG-based BCI.
  - The long time delay of the hemodynamic response leads to a lower information transfer rate.
  - There is no clear benefit from the hybrid NIRS-EEG-based BCI yet.
- Probable research directions:
  - Hybrid NIRS-EEG-based BCI with real-time classification
  - Zero-training classifier and adaptive calibration in real-time experiments