Digital slicing of 3D scenes by Fourier filtering of integral images

We present a novel technique to extract depth information from 3D scenes recorded with an Integral Imaging system. The technique exploits the periodic structure of the recorded integral image to implement a Fourier-domain filtering algorithm. A proper projection of the filtered integral image permits the reconstruction of the different planes that constitute the 3D scene. The main feature of our method is that the Fourier-domain filtering allows the reduction of out-of-focus information, providing the InI system with real optical sectioning capacity. © 2008 Optical Society of America

OCIS codes: (100.6890) Three-dimensional image processing; (110.4190) Multiple imaging; (110.6880) Three-dimensional image acquisition; (150.5670) Range finding.

Received 8 Jul 2008; revised 11 Sep 2008; accepted 11 Sep 2008; published 13 Oct 2008. © 2008 OSA, 27 October 2008 / Vol. 16, No. 22 / Optics Express 17154.

References and links
1. M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. Phys. (Paris) 7, 821-825 (1908).
2. H. E. Ives, "Optical properties of a Lippman lenticulated sheet," J. Opt. Soc. Am. 21, 171-176 (1931).
3. C. B. Burckhardt, "Optimum parameters and resolution limitation of Integral Photography," J. Opt. Soc. Am. 58, 71-76 (1968).
4. T. Okoshi, "Three-dimensional displays," Proc. IEEE 68, 548-564 (1980).
5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598-1603 (1997).
6. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034-2045 (1998).
7. H. Arimoto and B. Javidi, "Integral 3D imaging with digital reconstruction," Opt. Lett. 26, 157-159 (2001).
8. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324-326 (2002).
9. S. Jung, J.-H. Park, H. Choi, and B. Lee, "Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement," Opt. Express 11, 1346-1356 (2003).
10. J. Arai, M. Okui, T. Yamashita, and F. Okano, "Integral three-dimensional television using a 2000-scanning-line video system," Appl. Opt. 45, 1704-1712 (2006).
11. J.-S. Jang and B. Javidi, "Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes," Opt. Lett. 28, 1924-1926 (2003).
12. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express 12, 5237-5242 (2004).
13. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Extended depth-of-field 3-D display and visualization by combination of amplitude-modulated microlenses and deconvolution tools," J. Disp. Technol. 1, 321-327 (2005).
14. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734-2736 (2004).
15. H. Choi, S.-W. Min, S. Jung, J.-H. Park, and B. Lee, "Multiple viewing zone integral imaging using a dynamic barrier array for three-dimensional displays," Opt. Express 11, 927-932 (2003).
16. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, "Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system," Opt. Express 15, 16255-16260 (2007).
17. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324-326 (2002).
18. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Multifacet structure of observed reconstructed integral images," J. Opt. Soc. Am. A 22, 597-603 (2005).
19. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734-2736 (2004).
20. W. Matusik and H. Pfister, "3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes," ACM Trans. Graph. 23, 814-824 (2004).
21. H. Liao, S. Nakajima, M. Iwahara, N. Hata, I. Sakuma, and T. Dohi, "Real-time 3D image-guided navigation system based on integral videography," Proc. SPIE 4615, 36-44 (2002).
22. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. 41, 5488-5496 (2002).
23. S. Yeom and B. Javidi, "Three-dimensional distortion-tolerant object recognition using integral imaging," Opt. Express 12, 5795-5809 (2004).
24. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483-491 (2004).
25. C. Wu, A. Aggoun, M. McCormick, and S. Y. Kung, "Depth measurement from integral images through viewpoint image extraction and a modified multibaseline disparity analysis algorithm," J. Electron. Imaging 14, 023018 (2005).
26. J.-S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27, 1144-1146 (2002).
27. S.-H. Hong and B. Javidi, "Distortion-tolerant 3D recognition of occluded objects using computational integral imaging," Opt. Express 14, 12085-12095 (2006).
28. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, "Three-dimensional recognition of occluded objects by using computational integral imaging," Opt. Lett. 31, 1106-1108 (2006).
29. T. Wilson, ed., Confocal Microscopy (Academic, London, 1990).
30. R. Martínez-Cuenca, A. Pons, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Optically-corrected elemental images for undistorted integral image display," Opt. Express 14, 9657-9663 (2006).


Introduction
Integral Imaging (InI) is a 3D imaging technique based on the principle of Integral Photography (IP) [1-4]. IP uses a microlens array (MLA) to record a collection of 2D elemental images onto a photographic plate. Since each of these images conveys a different perspective of the 3D scene, the system is capable of acquiring 3D depth information. We refer to the complete set of elemental images as the integral image. When the integral image is displayed through a MLA, the rays of light retrace the same directions as in the pickup stage, so any observer in front of the MLA sees a 3D image of the original scene without the need for special glasses. Furthermore, this image can be observed from a certain range of angles. It was not until the late 20th century that the principle of IP actually attracted the attention of researchers in 3D television and imaging [5-9]. InI systems have developed thanks to advances in the fabrication of lenticular systems and the rising resolution of the digital devices used for the pickup and reconstruction stages [10]. In the past few years, research efforts have been addressed at improving the performance of InI: researchers strive to increase the depth of field [11-14], the viewing angle [15,16], and the quality of the displayed images [17,18]. There have also been remarkable practical advances, such as 2D-3D convertible displays [19] and multiview video architectures and rendering [20,21]. Among the new applications of InI, the reconstruction of the original 3D scene from the corresponding integral image is especially interesting. In fact, although the reconstruction algorithms are still far from a true profilometric technique, the reconstructed images allow the visualization of partially occluded objects [22-25] as well as the recognition of 3D objects [26-28].
The concept of optical sectioning was coined in the field of optical microscopy to refer to the capacity to provide sharp images of the individual sections of a 3D sample [29]. In scanning microscopes, optical sectioning is obtained with a pinhole that rejects the light scattered from out-of-focus sections. Here we implement this concept by means of comb filtering in the integral-image spectrum. Thus, we extract depth information from the 3D scene with real optical sectioning.

Fourier filtering of integral images
Let us start by analyzing the pickup stage of an InI system in the simple case in which a 2D object S is set parallel to the MLA at a distance z_S. Assuming that the paraxial approximation holds, it is clear that each microlens provides a scaled image of the object, and therefore the integral image is composed of a set of equally spaced replicas of the object. In Fig. 1 we have drawn a scheme of the pickup. For the sake of simplicity, the scheme and also the following equations are described in one dimension; the extension to 2D is straightforward. We assume that the MLA has an odd number of microlenses, N_x, and that the central microlens is aligned with the optical axis of the pickup system. We label this lens as L_0 and the other microlenses as L_m, m being the integer lens index ranging from -(N_x - 1)/2 to (N_x - 1)/2. If g denotes the gap between the MLA and the sensor,

M = -g / z_S

is the scale factor between the object and the image plane. The pickup period, T_S, is the distance between the replicas of S, and depends on the MLA pitch, p, through

T_S = p (1 + g / z_S).

The periodic structure of the integral image is the key to our Fourier-filtering procedure. Note that if the pickup is performed with optical barriers [30], only the microlenses whose index satisfies |m p - x_S| ≤ p z_S / (2 g), with x_S the lateral position of S, are able to record the replica of S. This constraint fixes the maximum, m_max, and the minimum, m_min, index of the lenses that record an image of S.
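As a quick numerical illustration of the relations above, the following sketch computes the scale factor M and the pickup period T_S. The pitch, gap, and object-distance values are hypothetical examples chosen for the illustration, not the parameters of the experiment.

```python
def pickup_geometry(p, g, z_s):
    """Return (M, T_s) for a plane object at distance z_s from the MLA.

    p   : microlens pitch
    g   : MLA-to-sensor gap
    z_s : object-to-MLA distance (same units as p and g)
    """
    M = -g / z_s              # scale factor between object and image plane
    T_s = p * (1 + g / z_s)   # pickup period: spacing between replicas
    return M, T_s

# Hypothetical values: 1 mm pitch, 3 mm gap, object 70 mm from the MLA
M, T_s = pickup_geometry(1.0, 3.0, 70.0)
# Each replica is a demagnified (|M| < 1) inverted copy of the object,
# and consecutive replicas are separated by slightly more than one pitch.
```

Note that T_S > p for any finite z_S, which is what makes the replica period a signature of depth: objects farther from the MLA produce replicas with a period closer to the lens pitch.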
The number of microlenses contributing to the integral image is then N_S = m_max - m_min + 1. Thus, the integral image of a plane object centered at x_S can be calculated as

I(x) = O(x / M) ⊗ Σ_{m = m_min}^{m_max} δ(x - x_m),

where ⊗ denotes the convolution product, x_m = m T_S + M x_S is the position of the m-th replica, and O(x) is the object intensity distribution. The Fourier transform of the expression above is

Ĩ(u) = |M| Õ(M u) Σ_{m = m_min}^{m_max} exp(-i 2π u x_m),

in which the symbol ~ denotes Fourier transformation; the sum of exponentials concentrates the spectrum around the harmonics u = n / T_S. In the general case of a 3D scene in which the occlusions are not too severe, the integral image results from the incoherent superposition of many periodic signals, each one corresponding to a different depth plane in the object space. In such a case, the spectrum of the integral image can be understood as a superposition of comb functions with different periods, each related to a specific depth in the object space. Consequently, the spectrum of the integral image can be calculated as

Ĩ(u) = Σ_z |M(z)| Õ_z(M(z) u) Σ_m exp(-i 2π u x_m(z)).

This particular structure of the spectrum of the integral image allows the use of Fourier-filtering tools to discriminate the spectral components corresponding to a particular depth. The filtering corresponding to a depth position z_R can be written as

Ĩ_R(u) = Ĩ(u) F_R(u),

where the frequency filter is simply the comb function

F_R(u) = Σ_n δ(u - n / T_R),

with T_R = p (1 + g / z_R). The inverse Fourier transform of the filtered image provides a new integral image that only includes the information corresponding to the selected depth. We have illustrated the filtering process in Fig. 2. Due to the pixelated nature of the sensor, each depth section consists of an array of sinc functions in the Fourier domain. Since the sinc function does not fall sharply to zero, the signals generated by objects at different depths cannot be perfectly discriminated.
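The comb filtering can be sketched numerically. The following 1-D example is our own illustrative implementation, not code from the paper: it builds a binary comb mask around the harmonics u = n/T_R in the FFT domain (a discrete stand-in for the delta-comb F_R, with pass bands of finite width) and applies it to a synthetic signal made of replicas with a fixed period.

```python
import numpy as np

def comb_filter_1d(I, T_R, dx, half_width=1):
    """Keep only the spectral bins near the harmonics u = n / T_R.

    I          : 1-D integral image (real-valued samples)
    T_R        : replica period selected for the depth z_R (same units as dx)
    dx         : sensor pixel pitch
    half_width : number of frequency bins kept around each comb tooth
    """
    N = I.size
    spec = np.fft.fft(I)
    u = np.fft.fftfreq(N, d=dx)            # frequency axis (cycles per unit)
    nearest = np.round(u * T_R)            # nearest comb order for each bin
    mask = np.abs(u - nearest / T_R) <= half_width / (N * dx)
    return np.real(np.fft.ifft(spec * mask))

# A signal whose replicas repeat every 64 samples survives a matched comb
# filter, while a mismatched comb (i.e. the wrong depth) removes most of
# its energy.
I = np.zeros(1024)
I[::64] = 1.0
matched = comb_filter_1d(I, T_R=64.0, dx=1.0)
mismatched = comb_filter_1d(I, T_R=48.0, dx=1.0)
```

As the text notes, discrimination is imperfect: the DC tooth and any harmonics shared by the two periods always pass, so a mismatched depth is attenuated rather than removed entirely.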

Volumetric reconstruction of Fourier filtered integral image
The volumetric reconstruction using the back projection technique described in [24] can be applied to the filtered integral image to reconstruct an arbitrary plane parallel to the MLA. In this approach, each elemental image is inversely back projected onto the desired hypothetical reconstruction plane through its unique associated pinhole. The collection of all the back projected elemental images is then superimposed computationally to obtain the intensity distribution on the desired reconstruction plane. The intensity of each point is determined by averaging the intensity information carried by all the rays intersecting at that point of the reconstruction plane. It should be noted that the number of rays conveying information about each object point may vary from point to point, depending on the field of view of each elemental image. For instance, in Fig. 3, point R_1 lies within the field of view of 7 elemental images, whereas the intensity information about R_2 is only carried by 6 rays. This difference must be taken into account in the averaging process. Note, besides, that the ray cones emanating from a single object point at the reconstruction plane recombine under back projection to accurately recreate the intensity of the object point, whereas rays emanating from object points away from the reconstruction plane mix with their neighbors, resulting in a defocused effect. Thus, with the computational reconstruction one is able to obtain a focused image of an object at the correct reconstruction distance; the rest of the scene appears blurred. Denoting by O_k(x) the k-th elemental image and by x_k the position of its associated pinhole, the reconstructed intensity at a distance z_R can be written as

I_R(x) = (1 / K) Σ_k (1 / R) O_k( x_k - (g / z_R)(x - x_k) ),

in which K denotes the number of elemental images acquired. Besides, R compensates for the intensity variation due to the different distances from the object plane to the elemental image O_k(x) on the sensor, and is given by

R = (z_R + g)² + (x - x_k)².

However, in most cases of interest, where the sensor size is smaller than the reconstruction distance, this expression is dominated by the term (z_R + g)² and can be assumed to be constant for a particular reconstruction distance.
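A minimal 1-D sketch of this back projection is given below. It is our own illustration under simplifying assumptions (unit intensity factor R, nearest-pixel sampling, pinholes on a regular grid): each reconstruction-plane point is mapped back through every pinhole, and only the rays that actually fall inside an elemental image contribute to the average, implementing the per-point ray counting discussed above.

```python
import numpy as np

def back_project_1d(elem_imgs, p, g, z_r, dx, x_out):
    """Back project 1-D elemental images onto the plane at distance z_r.

    elem_imgs : (K, Np) array; K elemental images of Np pixels (pitch dx),
                each centred behind its own pinhole
    p         : pinhole (lens) pitch
    g         : pinhole-to-sensor gap
    z_r       : reconstruction distance
    x_out     : lateral positions sampled on the reconstruction plane
    """
    K, Np = elem_imgs.shape
    recon = np.zeros_like(x_out, dtype=float)
    count = np.zeros_like(x_out, dtype=float)
    local = (np.arange(Np) - (Np - 1) / 2.0) * dx   # sensor coords per lens
    for k in range(K):
        xc = (k - (K - 1) / 2.0) * p                # pinhole position
        # A ray from object point x through pinhole xc hits the sensor at
        # local coordinate xi = -(x - xc) * g / z_r (pinhole inversion)
        xi = -(x_out - xc) * g / z_r
        idx = np.round((xi - local[0]) / dx).astype(int)
        valid = (idx >= 0) & (idx < Np)             # within this image's FOV
        recon[valid] += elem_imgs[k, idx[valid]]
        count[valid] += 1.0
    count[count == 0] = 1.0                         # avoid division by zero
    return recon / count                            # ray-count-aware average

# Synthetic check: a point source at x = 0, depth z_s, recorded through
# 5 pinholes, reconstructs sharply at z_r = z_s and dimly at a wrong depth.
K, Np, dx, p, g, z_s = 5, 21, 0.1, 1.0, 3.0, 10.0
elem = np.zeros((K, Np))
local0 = -(Np - 1) / 2.0 * dx
for k in range(K):
    xc = (k - (K - 1) / 2.0) * p
    elem[k, int(round((xc * g / z_s - local0) / dx))] = 1.0
x_out = np.linspace(-1.0, 1.0, 41)
focus = back_project_1d(elem, p, g, z_s, dx, x_out)
defocus = back_project_1d(elem, p, g, 5.0, dx, x_out)
```

The division by `count` is the averaging step in which the number of contributing rays varies across the reconstruction plane, as illustrated by points R_1 and R_2 in Fig. 3.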
If the computational reconstruction by back projection is applied to the filtered integral image, true optical sectioning becomes possible. This is due to the fact that, after the filtering, the objects that are away from the reconstruction plane have been strongly attenuated in each elemental image, whereas the objects at the reconstructed plane remain as sharp images.

Experimental results
To show the feasibility of our method, we conducted optical experiments to obtain the integral image of a 3D scene consisting of two toy cars located at different distances from the MLA, as depicted in Fig. 4. In the experiment, the images were recorded using a square MLA composed of 41×27 lenslets.
The cars, labeled 6 and 2, were located approximately 70 mm and 90 mm away from the MLA, respectively. In Fig. 5(a) we show the subset of 1×3 elemental images obtained with the 3 central microlenses. Note that, from this perspective, there is a significant occlusion of the blue car by the red car. Figure 5(b) shows the periodic structure of the spectrum of the integral image (the spectrum is not strictly periodic, since it is modulated by a sinc function, the Fourier transform of the pixel shape). Note the spreading of each order in this figure due to the presence of signals at different depths. In Fig. 5(c) we mark the filtering positions corresponding to two different planes. In Fig. 7 we show a set of reconstructed images at different depths. As can be seen, the proposed procedure permits focusing the two cars separately. In the movie we see the result of the reconstruction over the filtered planes.

Conclusions
We have presented an alternative computational reconstruction method for integral imaging based on Fourier filtering. The technique relies on the fact that each point in the object space is imaged as a set of replicas with a spatial period that depends on the distance of the object point. By performing a volumetric reconstruction from the filtered integral image, the system is capable of optical sectioning, since the out-of-focus signals are attenuated twice: first by the Fourier filtering and then by the defocus of the back projection. Experimental results demonstrate the feasibility of the technique in terms of providing sharp slices of the reconstructed scene.

Fig. 1. Each microlens is labeled by its position in the array. The origin for the indices is the center of the central microlens. The images of a point source through the microlenses are depicted.

As shown in Fig. 1, the integral image of a point object placed at (x_S, z_S) is composed of a series of replicas positioned at x_m = m T_S + M x_S.

Fig. 2. The Fourier transform of an integral image. The left side illustrates the signals that correspond to two sources at different depths, namely S and S'. On the right, we show the performance of the filtering: only the signals with pitch close to T_R pass through the filter.

Fig. 3. The volumetric reconstruction calculates the reconstructed field by projecting the integral image through the pinhole array. Optical barriers are also simulated to avoid overlapping. The filtered k-th elemental image is denoted by O_k(x).

Fig. 4. The 3D scene was composed of two toy cars. The cars were about 20×10 mm in size.

Fig. 5. a) Set of 1×3 elemental images of the recorded integral image. b) Central part of its spectrum. c) The filtering with comb functions of different periods permits discrimination of the information at a given depth in the 3D scene. We show two different filtering pitches: red for z_R = 70 mm and blue for z_R = 120 mm.

After performing the Fourier filtering we obtained a stack of 35 filtered integral images, ranging from 50 mm to 120 mm in steps of 2 mm. In Fig. 6 we show two subsets of elemental images from the filtered stacks, at z_R = 70 mm and z_R = 92 mm. Each filtered integral image is then used as the input for the subsequent volumetric reconstruction. This allows slice-by-slice reconstruction of the 3D scene showing optical sectioning effects.