
Optics Communications

Volume 340, 1 April 2015, Pages 121-125

Acceleration of color computer-generated hologram from RGB–D images using color space conversion

https://doi.org/10.1016/j.optcom.2014.11.099

Abstract

We report acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as RGB and depth (D) images. These images are captured by a depth camera or the depth buffer of a 3D graphics library. The RGB and depth images preserve the color and depth information of the 3D scene, respectively, so we can regard them as two-dimensional (2D) section images along the depth direction. In general, convolution-based diffraction such as the angular spectrum method is used to calculate CGHs from the 2D section images; however, it takes an enormous amount of time because of the multiple diffraction calculations. In this paper, we first describe "band-limited double-step Fresnel diffraction (BL-DSF)", which accelerates the diffraction calculation compared with convolution-based diffraction. Next, we describe acceleration of color CGH calculation using color space conversion. Color CGHs are generally calculated in RGB color space; however, the same calculation must be performed for each color component repeatedly, so the computational cost of a color CGH calculation is three times that of a monochrome CGH calculation. Instead, we use YCbCr color space, because the 2D section images in YCbCr color space can be down-sampled without deterioration of the image quality.

Introduction

In recent years, electroholography, which displays computer-generated holograms on a spatial light modulator (SLM), has been actively studied. Electroholography is a promising technique for a three-dimensional (3D) display because it is capable of reconstructing the wavefront of a 3D scene. In state-of-the-art electroholography [1], [2], [3], [4], SLMs with a large number of pixels are needed to increase the quality of the reconstructed images. Practical electroholography requires a high-resolution SLM to display a CGH because the size of the reconstructed 3D scene is proportional to the size of the CGH and the viewing angle is inversely proportional to the pixel pitch of the CGH. Therefore, a large CGH must be generated by calculating the diffraction from the 3D scene, which takes a long time. To solve this problem, many methods for accelerating CGH generation have been proposed toward real-time calculation [5], [6], [7].

Methods of CGH calculation are roughly classified into several approaches: point light source (PLS), polygon, and RGB–D approaches. These approaches express a 3D scene in different ways. The PLS approach expresses a 3D scene as an aggregation of point light sources, and the CGH is generated by accumulating the light waves from the PLSs on the hologram plane [5], [6]. The polygon approach expresses a 3D scene as an aggregation of polygons, and the light waves from the polygons are accumulated on the CGH [8], [9], [10], [11]. The RGB–D approach expresses a 3D scene as a set of RGB and depth (D) images that are readily obtained by 3D graphics libraries and RGB–D cameras [12], [13], [14], [15]. In this paper, we focus on the RGB–D approach in terms of reducing the computational cost of CGH calculation.

RGB and depth images can be regarded as two-dimensional (2D) section images along the depth direction by picking up the RGB pixels with the same depth value. The PLS and RGB–D representations are essentially the same; however, in this paper, we treat section images generated from RGB–D images. The generation of CGHs from 2D section images requires multiple diffraction calculations. Convolution-based diffraction, which is widely used in computer holography, is generally expressed as u2(x2, y2) = F⁻¹[F[u1(x1, y1)] Pz(fx, fy)], where the operators F[·] and F⁻¹[·] denote the Fourier and inverse Fourier transforms, respectively, u1(x1, y1) and u2(x2, y2) indicate the source and destination planes, and Pz(fx, fy) is the transfer function of the optical system for propagation distance z. For example, the angular spectrum method (ASM) [16] uses Pz(fx, fy) = exp(2πiz√(1/λ² − fx² − fy²)). One merit of convolution-based diffraction is that the sampling rate on the destination plane is the same as that on the source plane; however, convolution-based diffraction requires the expansion of the 2D section images to avoid aliasing noise, so the CGH calculation takes a long time and requires a large amount of memory.
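As a concrete illustration of convolution-based diffraction, the following is a minimal NumPy sketch of the ASM with the 2× expansion mentioned above. The function name, the zero-padding scheme, and the evanescent-wave cutoff are our assumptions for illustration, not implementation details from the paper.

```python
import numpy as np

def asm_propagate(u1, wavelength, z, pitch):
    """Angular spectrum method: propagate complex field u1 by distance z.

    The field is zero-padded to twice its size in each dimension to
    suppress the circular-convolution (aliasing) artifacts; this 4x
    growth in area is the memory cost discussed in the text.
    """
    ny, nx = u1.shape
    u = np.zeros((2 * ny, 2 * nx), dtype=complex)
    u[:ny, :nx] = u1
    fx = np.fft.fftfreq(2 * nx, d=pitch)
    fy = np.fft.fftfreq(2 * ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # transfer function Pz = exp(2*pi*i*z*sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (arg < 0) are simply discarded here
    Pz = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))), 0)
    u2 = np.fft.ifft2(np.fft.fft2(u) * Pz)
    return u2[:ny, :nx]  # crop back to the original window
```

The sampling pitch on the destination plane equals that on the source plane, which is the merit noted above.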

This paper first describes the RGB–D approach using band-limited double-step Fresnel diffraction (BL-DSF) [15], which, unlike convolution-based diffraction, does not need this expansion. Next, we describe color space conversion [17], which is capable of reducing the calculation cost of color CGHs. In general, color CGHs are generated in RGB color space; however, the same calculation must be performed for each color component repeatedly, so the computational cost of a color CGH calculation is three times that of a monochrome CGH calculation. Instead, we use YCbCr color space, because the 2D section images in YCbCr color space can be down-sampled without deterioration of the image quality. Finally, we describe the combination of BL-DSF and the color space conversion method to accelerate the CGH calculation from RGB–D images.

Section snippets

Band-limited double step Fresnel diffraction

For calculating color CGHs, we use band-limited DSF (BL-DSF) [15] based on double-step Fresnel (DSF) diffraction [18]. DSF is expressed by single-step Fresnel diffraction (SSF), which is a Fourier transform-based diffraction [16]. SSF is expressed as follows: u2(m2, n2) = SSF_z[u1(m1, n1)] = C_z FFT[u1(m1, n1) exp(iπ(x1² + y1²)/(λz))], where C_z = exp(ikz)/(iλz). We define the integer coordinates m1, m2 ∈ [0, Nx − 1] and n1, n2 ∈ [0, Ny − 1], the physical coordinates (x1, y1) = ((m1 − Nx/2)p_x1, (n1 − Ny/2)p_y1), the sampling rates on
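The SSF building block above can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the square-field assumption, the fftshift centering convention, and the function name are our choices. Note that, unlike convolution-based diffraction, the destination sampling pitch changes to p2 = λz/(N·p1), which is what DSF exploits by chaining two SSF steps.

```python
import numpy as np

def ssf(u1, wavelength, z, p1):
    """Single-step Fresnel diffraction (one FFT): a sketch of the SSF
    building block of double-step Fresnel diffraction.

    Returns the destination field and its sampling pitch p2 != p1.
    """
    n = u1.shape[0]  # assume a square N x N field
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * p1  # centered physical coordinates
    X, Y = np.meshgrid(x, x)
    # quadratic phase (chirp) applied on the source plane
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
    cz = np.exp(1j * k * z) / (1j * wavelength * z)  # C_z prefactor
    u2 = cz * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u1 * chirp)))
    p2 = wavelength * z / (n * p1)  # destination sampling pitch
    return u2, p2
```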

Color space conversion

Color CGHs are generally calculated in RGB color space; however, the same calculation needs to be performed for each color component repeatedly, so the computational cost of a color CGH calculation is three times that of a monochrome CGH calculation. One simple approach to reducing the computational cost is to down-sample the RGB components. However, this approach results in deterioration of the image quality and blurring of the down-sampled components. So, we considered
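To make the color-space step concrete, here is a small sketch of the RGB-to-YCbCr conversion followed by 2×2 chroma down-sampling. The BT.601 coefficients and the averaging scheme are our assumptions; the paper does not specify which YCbCr variant or down-sampling filter is used.

```python
import numpy as np

# ITU-R BT.601 full-range RGB -> YCbCr matrix (an assumption for this sketch)
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to Y, Cb, Cr planes."""
    ycc = rgb @ M.T
    ycc[..., 1:] += 0.5  # center the chroma planes around 0.5
    return ycc[..., 0], ycc[..., 1], ycc[..., 2]

def downsample2(plane):
    """Halve the resolution by 2x2 averaging; applied only to the Cb and
    Cr planes, where reduced detail barely affects the perceived image."""
    h, w = plane.shape
    return plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Only the full-resolution Y plane then requires a full-size diffraction calculation; the down-sampled Cb and Cr planes are cheaper, which is the source of the speed-up.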

Color CGH acceleration with RGB–D images using BL-DSF and color space conversion

This section describes the color CGH calculation from RGB–D images that are obtained by a depth camera or a 3D graphics library. An RGB–D image consists of RGB images R(m1), G(m1), B(m1) and a depth image dep(m1) of the 3D scene.

The CGH generation from the RGB–D image first picks up the pixels in the RGB image that share the same depth value in the depth image. Here, each pixel in the depth image has 8 bits. After picking up the pixels, we can generate planes parallel to the CGH with the same
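The layer extraction described above can be sketched as follows. The function name, the quantization scheme, and the skipping of empty layers are our illustrative choices, not details from the paper.

```python
import numpy as np

def depth_layers(rgb, depth, n_layers=256):
    """Slice an RGB-D image into 2D section images, one per depth value.

    Pixels whose 8-bit depth falls in bin d are copied into layer d; all
    other pixels in that layer stay zero. Only non-empty layers are
    yielded, to avoid needless diffraction calculations.
    """
    bins = (depth.astype(np.uint16) * n_layers) // 256  # quantize 8-bit depth
    for d in np.unique(bins):
        mask = (bins == d)
        layer = np.where(mask[..., None], rgb, 0.0)
        yield d, layer
```

Each yielded layer is then propagated to the CGH plane as one of the parallel section planes.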

Results

We demonstrate the generation of CGHs using the conventional and proposed methods, and we show the reconstructed images and calculation times of these methods. In the optical system, we use a time-division method that switches the RGB CGHs in time sequence on one SLM for the color reconstruction [20], [21], [22], [23]. We use the datasets for densely sampled 4D light fields provided by Wanner et al. [24] as RGB–D images, as shown in Fig. 1, and we implement all the calculations using CWO++ [25],

Conclusion

We described the acceleration of the CGH calculation using a combination of "band-limited double-step Fresnel diffraction" and color space conversion. In spite of the down-sampling of the Cb and Cr components, the sharpness of the reconstructed image is well maintained. The proposed method is about 10 times faster on the CPU and 3 times faster on the GPU than a conventional color CGH calculation using the ASM in RGB color space.

Acknowledgment

This work is partially supported by JSPS KAKENHI Grant nos. 25330125 and 25240015, and the Kayamori Foundation of Information Science Advancement and Yazaki Memorial Foundation for Science and Technology.

References (25)

  • T. Shimobaba et al., Computational wave optics library for C++: CWO++ library, Comput. Phys. Commun. (2012)
  • C. Slinger et al., Computer generated holography as a generic display technology, Computer (2005)
  • Y. Takaki et al., Modified resolution redistribution system for frameless hologram display module, Opt. Express (2010)
  • F. Yaras et al., Circular holographic video display system, Opt. Express (2011)
  • H. Sasaki et al., Image size scalable full-parallax coloured three-dimensional video by electronic holography, Sci. Rep. (2014)
  • H. Yoshikawa, T. Yamaguchi, R. Kitayama, Real-time generation of full color image hologram with compact distance...
  • T. Shimobaba et al., Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane, Opt. Lett. (2009)
  • Y. Ichihashi et al., Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms, Opt. Express (2012)
  • K. Matsushima et al., Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves, J. Opt. Soc. Am. A (2003)
  • L. Ahrenberg et al., Computer generated holograms from three dimensional meshes using an analytic light transport model, Appl. Opt. (2008)
  • H. Sakata et al., Fast computation method for a Fresnel hologram using three-dimensional affine transformations in real space, Appl. Opt. (2009)
  • K. Yamamoto et al., Calculating the Fresnel diffraction of light from a shifted and tilted plane, Opt. Express (2012)