Wide-viewing full-color depthmap computer-generated holograms

An efficient synthesis algorithm for wide-viewing full-color depthmap computer-generated holograms is proposed. We develop a precise computational algorithm that integrates wave-optic geometry mapping, color matching, and noise filtering to multiplex multiview elementary computer-generated holograms (CGHs) into a single high-definition CGH without three-dimensional perspective distortion or color dispersion. Computational parallelism is exploited to significantly improve the production throughput of full-color wide-viewing-angle CGHs. The proposed algorithm is verified through full-color binary hologram reconstruction experiments using an off-axis R·G·B simultaneous illumination method, which suggests the feasibility of full-color sub-wavelength binary spatial light modulator technology.


Introduction
The technology for generating three-dimensional (3D) holographic images from computer-generated holograms (CGHs) has been extensively researched over recent decades. The key factor for a holographic display is a spatial light modulator (SLM) with a small pixel pitch on the wavelength scale. However, state-of-the-art SLMs have not yet reached the necessary technology level in terms of pixel pitch and resolution. Research has been undertaken to overcome these physical limitations in the development of holographic 3D displays by using dynamic light-wave steering techniques and advancing SLM technology [1,2]. It is noteworthy that a liquid crystal-based SLM technology implementing a pixel pitch of 1 µm was announced recently [3].
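To illustrate why pixel pitch is the key factor, a minimal sketch of the standard grating-equation bound on the diffraction angle is given below; the function name and the 8 µm comparison pitch are illustrative choices, not taken from the paper.

```python
import numpy as np

def max_diffraction_half_angle_deg(wavelength_m, pixel_pitch_m):
    # A pixelated SLM with pitch p can display spatial frequencies up to
    # 1/(2p); the grating equation sin(theta) = lambda/(2p) then gives the
    # maximum diffraction half-angle, which bounds the viewing angle.
    return np.degrees(np.arcsin(wavelength_m / (2 * pixel_pitch_m)))

# An 8 um pitch (typical of commercial SLMs) vs. the 1 um pitch mentioned
# above, both at 532 nm (green)
print(max_diffraction_half_angle_deg(532e-9, 8e-6))   # ~1.9 degrees
print(max_diffraction_half_angle_deg(532e-9, 1e-6))   # ~15.4 degrees
```

Shrinking the pitch from 8 µm to 1 µm widens the diffraction half-angle from about 1.9° to about 15.4°, which is the motivation for sub-micron binary SLM pixels.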
Regarding CGH theory, a binary-modulation SLM design would be acceptable in practice. Although binary CGH design physically limits the modulation of the light field, it can generate high-quality CGH scenes; moreover, binary modulation of amplitude or phase alone simplifies the SLM architecture and enables the realization of SLMs with sub-micron pixel sizes [4]. A few studies related to binary SLM technology have been reported. Binary SLM technologies using microelectromechanical systems (MEMS) [5] and phase-change materials [6], together with related noise-reduction techniques [7,8], are notable research topics. Moreover, it is evident that advanced RGB color filters must be applied to all the small unit pixels of a CGH to represent full-color holographic images under white-light illumination. The hybridization of recent advanced single-pixel structural color filter technology and spatial light modulation technology is a promising candidate for realizing full-color CGHs on deep-subwavelength binary patterns. In practice, however, additional noise-filtering steps should be anticipated.

For the parallel calculation of CGHs, a segmentation or partitioning that enables independent calculation of partial CGH components is designed. In polygon CGH design, spatial division in the object plane is used, as depicted in Fig. 1(a). The local CGH pattern of a single polygon unit is computed locally, and the total CGH pattern is synthesized by superposing all partial CGH patterns in the total CGH plane (z = 0). Spatial segmentation of the object plane has been proven to produce high-quality holographic 3D images and secures computational parallelism for polygon CGHs. However, it is not appropriate for wide-viewing depthmap CGH synthesis, since standard depthmap data does not use object segmentation in the xy plane but z-directional sectioning.
Depthmap CGH allows us to approximate photorealistic scenes using a limited number of image data depths, providing excellent flexibility for large-scale CGHs. This paper proposes a simple and efficient viewing-zone partitioning method for wide-viewing-angle depthmap CGH that enables desirable, efficient parallel computation (Fig. 1(b)). The object-domain segmentation method and the viewing-zone partitioning method are compared in Figs. 1(a) and 1(b). The proposed depthmap CGH computation is composed of two steps. The first step computes the (m, n)th partial CGH F_{m,n}(x_1, y_1) with M × N resolution. The second collects and multiplexes all components F_{m,n}(x_1, y_1) with the corresponding carrier wave exp(j(k_{m,n,x} x_1 + k_{m,n,y} y_1)) in the total CGH plane (z = 0), which is the lateral component of the carrier wave exp(j(k_{m,n,x} x + k_{m,n,y} y + k_{m,n,z} z)). This step multiplies each partial CGH by the carrier wave of the corresponding direction and combines them in a single CGH plane. The resulting total CGH is represented as

W(x_1, y_1) = Σ_m Σ_n W_{m,n}(x_1, y_1) = Σ_m Σ_n F_{m,n}(x_1, y_1) exp(j(k_{m,n,x} x_1 + k_{m,n,y} y_1)),  (1)

where the lateral k-vector (k_{m,n,x}, k_{m,n,y}) is given by

(k_{m,n,x}, k_{m,n,y}) = (2π/λ)(cos φ_{m,n} sin θ_{m,n}, sin φ_{m,n} sin θ_{m,n}).  (2)

The carrier wave component plays the role of delivering the holographic image signal to the (m, n)th viewing zone. In the schematic of Fig. 1(b), the angular diffraction field F_{m,n}(x_1, y_1) is distributed over the (m, n)th viewing zone, where the viewer's eye is placed and perceives the 3D holographic image. For each single directional elementary CGH, we use the depthmap CGH synthesis algorithm developed in our previous research [24,25]. For the computation of F_{m,n}(x_1, y_1), the complex field W_{m,n}(x'_1, y'_1) is first calculated on the x'_1 y'_1 plane orthogonal to the off-axis viewing direction. Next, a field-mapping process is performed to determine the complex field distribution on the x_1 y_1 plane, F_{m,n}(x_1, y_1).
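The two-step multiplexing can be sketched as follows. This is a schematic illustration only: the viewing-zone angles, grid sizes, and the placeholder partial-CGH generator are hypothetical, and the real partial CGHs come from the depthmap algorithm of refs. [24,25].

```python
import numpy as np

wavelength = 488e-9          # blue, as used in the paper's simulation
pitch = 0.425e-6             # pixel pitch from the paper
M = N = 3                    # small grid of viewing zones for illustration
res = 256                    # illustrative CGH resolution
x = (np.arange(res) - res // 2) * pitch
X, Y = np.meshgrid(x, x)

def partial_cgh(m, n):
    # Placeholder for the (m, n)-th low-bandwidth partial CGH F_{m,n};
    # a random complex field stands in for the depthmap computation.
    rng = np.random.default_rng(m * N + n)
    return rng.standard_normal((res, res)) + 1j * rng.standard_normal((res, res))

total = np.zeros((res, res), dtype=complex)
for m in range(M):
    for n in range(N):
        # Hypothetical direction angles of the (m, n)-th viewing zone
        theta = np.radians(5.0)                      # polar angle
        phi = 2 * np.pi * (m * N + n) / (M * N)      # azimuth angle
        # Lateral k-vector of the carrier wave for this direction
        kx = (2 * np.pi / wavelength) * np.cos(phi) * np.sin(theta)
        ky = (2 * np.pi / wavelength) * np.sin(phi) * np.sin(theta)
        carrier = np.exp(1j * (kx * X + ky * Y))
        total += partial_cgh(m, n) * carrier         # multiplexing of Eq. (1)
```

Each loop iteration is independent, which is what makes the viewing-zone partitioning naturally parallelizable.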

Field mapping for securing correct perspective
We use a field-mapping algorithm in the spatial frequency domain for an accurate perspective representation. As illustrated in Fig. 2, the main objective of the field mapping is to ensure wave-optic perspective correction, which is similar to the role of keystone correction in typical commercial projectors. The field mapping involves a rigorous wave-optic geometry mapping for a correct off-axis perspective. A simulation example in Fig. 3 compares the optical field W_{m,n}(x'_1, y'_1) in the local-coordinate x'_1 y'_1 plane for the eye positions ①, ②, and ③ with the corresponding global-coordinate optical field F_{m,n}(x_1, y_1) at z = 0. At position ① (on-axis), there is no difference between W_{m,n}(x'_1, y'_1) and F_{m,n}(x_1, y_1). For the other cases, however, the slightly diffracted and distorted field distribution F_{m,n}(x_1, y_1) yields the rectangular image W_{m,n}(x'_1, y'_1) on the local-coordinate x'_1 y'_1 plane. The viewer at off-axis positions ② and ③ sees W_{m,n}(x'_1, y'_1) on the x'_1 y'_1 plane, and the corresponding optical field Q_{m,n}(x'_2, y'_2) is formed on the viewer's retina plane. It is noteworthy that, even for the same W_{m,n}(x'_1, y'_1), F_{m,n}(x_1, y_1) changes according to the viewer's exact off-axis position. Unfolding the theory, we use the spatial frequency vector (α_{m,n}, β_{m,n}, γ_{m,n}), defined by

(α_{m,n}, β_{m,n}, γ_{m,n}) = (1/2π)(k_{m,n,x}, k_{m,n,y}, k_{m,n,z}) = (1/λ)(cos φ_{m,n} sin θ_{m,n}, sin φ_{m,n} sin θ_{m,n}, cos θ_{m,n}).  (3)

In the free space of the local coordinate system, W_{m,n}(x'_1, y'_1, z'_1) is represented by the angular spectrum integral

W_{m,n}(x'_1, y'_1, z'_1) = ∬ Ā_{m,n}(α, β) exp(j2π(α x'_1 + β y'_1 + γ(α, β) z'_1)) dα dβ,  (4)

where Ā_{m,n}(α, β) is the angular spectrum of the low-bandwidth holographic image (for the complete derivation, see Appendix). The light field distribution F_{m,n}(x_1, y_1) in the global coordinate system can be extracted from W_{m,n}(x_1, y_1, 0), which is the global-coordinate version of W_{m,n}(x'_1, y'_1, z'_1). It can be decomposed into the carrier wave term e^{j2π(α_{m,n} x_1 + β_{m,n} y_1)} and the low-bandwidth holographic image signal F_{m,n}(x_1, y_1):

W_{m,n}(x_1, y_1, 0) = F_{m,n}(x_1, y_1) e^{j2π(α_{m,n} x_1 + β_{m,n} y_1)}.  (5)

F_{m,n}(x_1, y_1) is interpreted as the low-bandwidth holographic image delivered to the (m, n)th viewing zone by the carrier wave e^{j2π(α_{m,n} x_1 + β_{m,n} y_1)}.
Importantly, in the depthmap CGH computation, to achieve computational efficiency, we decompose Eq. (5) into the analytic carrier wave (for which dense sampling is needed) and the numerical partial CGH field (which has low resolution). In the numerical computation of the total CGH of Eq. (1), the CGH component W_{m,n}(x_1, y_1) must be sufficiently oversampled to resolve the carrier wave feature e^{j2π(α_{m,n} x_1 + β_{m,n} y_1)}. For example, the resolution of W_{m,n}(x_1, y_1) should be 30001×30001 for a corresponding F_{m,n}(x_1, y_1) with a resolution of just 2001×2001.
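The decomposition above can be sketched numerically: the carrier is evaluated analytically on the dense grid, while the low-bandwidth partial CGH is computed on a coarse grid and upsampled. Grid sizes are scaled down from the 30001×30001 / 2001×2001 example (keeping the same ~15× oversampling ratio), and the carrier frequencies are hypothetical.

```python
import numpy as np

pitch = 0.425e-6
dense, coarse = 1500, 100            # ~15x oversampling, as in the text
alpha, beta = 2.0e5, 1.0e5           # hypothetical carrier frequencies (1/m)

x = np.arange(dense) * pitch
X, Y = np.meshgrid(x, x)

# Placeholder low-resolution partial CGH F_{m,n} (here a uniform field)
F_coarse = np.ones((coarse, coarse), dtype=complex)

# Nearest-neighbor upsampling suffices because F_{m,n} is low-bandwidth
idx = (np.arange(dense) * coarse) // dense
F_dense = F_coarse[np.ix_(idx, idx)]

# Carrier sampled analytically on the dense grid (Eq. (5) decomposition)
W = F_dense * np.exp(1j * 2 * np.pi * (alpha * X + beta * Y))
```

Only the carrier needs the dense sampling; the expensive depthmap computation stays at the coarse resolution.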

Full-color depthmap computer-generated holograms
The advantage of depthmap CGH is that its computational complexity is invariant to object complexity, so it has no limitations in representing complicated or photorealistic scenes. Thus, for depthmap CGHs, parallel computation is well defined and readily enables computational acceleration.
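Since each directional partial CGH is independent, the (m, n) jobs can be dispatched to a worker pool and summed afterwards. The sketch below uses a thread pool and a dummy per-direction kernel; the pool size, grid, and kernel are illustrative, not the paper's implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

M = N = 4       # number of viewing-zone subdivisions (illustrative)
res = 128       # partial-CGH resolution (illustrative)

def compute_partial(mn):
    # Dummy stand-in for the per-direction depthmap CGH computation;
    # each (m, n) job depends only on its own inputs.
    m, n = mn
    rng = np.random.default_rng(m * N + n)
    return rng.standard_normal((res, res)) + 0j

# All M*N directional CGHs are computed in parallel, then superposed
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(compute_partial, product(range(M), range(N))))
total = np.sum(partials, axis=0)
```

In a production setting the same structure maps directly onto multi-process or GPU parallelism, since no job shares state with another.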
As illustrated in Figs. 4(a) and (b), the depthmap CGH is calculated using several layers of the 3D object (L_1–L_n) at regular intervals along the optical axis. The input data of L_1–L_n is composed of the intensity image and depth data of the target object. Using the ICdFr algorithm [24], the depthmap CGH F_{m,n}(x_1, y_1) is obtained and is multiplexed into the total CGH in the form W_{m,n}(x_1, y_1) (Eq. (1)). Because wide-viewing-angle CGHs should allow observation of the target object even off-axis, the object input data for each direction should be a sectioned image of the layered object, as shown in Fig. 4(b). As represented in Fig. 4(c), a depthmap model of the target object is constructed for a specific viewing direction. Since the retinal plane's computational grid is independent of color in the depthmap CGH method, the algorithm allows color matching of the R/G/B CGH components without an additional compensation algorithm [24]. The total CGH of Eq. (1) provides the viewer a wide-viewing-angle 3D object from various directions with continuous parallax and full accommodation effect, as shown in Fig. 4(d). Figure 5 presents the simulation result of the wide-viewing single-color depthmap CGH, which shows that five different perspective views are reconstructed according to the observation position in the viewing zone. The size of the CGH is about 2 cm × 2 cm, the pixel pitch is 0.425 µm, and the wavelength is 488 nm (blue). The simulation result confirms that the target object is well represented in various directions and that the motion parallax phenomenon is well reproduced. Regarding RGB color matching, two core algorithms have been developed to produce correctly RGB-matched wide-viewing-angle CGH 3D images. The primary consideration is that the viewing zone size varies with the RGB wavelengths, as indicated in Fig. 6(a).
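A generic layer-based synthesis along these lines can be sketched as below. This is a standard layered-propagation illustration, not the ICdFr algorithm of ref. [24]; the layer count, spacing, and random-phase choice are assumptions for the sketch.

```python
import numpy as np

def propagate(field, pitch, wavelength, z):
    """Fresnel (paraxial) propagation of a sampled field by distance z,
    implemented with the angular-spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, pitch = 488e-9, 0.425e-6
n, n_layers, dz = 512, 8, 1e-3       # illustrative grid and layer spacing

rng = np.random.default_rng(0)
# Hypothetical layered object: amplitude slices at regular axial intervals
layers = [rng.random((n, n)) for _ in range(n_layers)]

cgh = np.zeros((n, n), dtype=complex)
for i, layer in enumerate(layers):
    # Random initial phase reduces inter-layer interference artifacts
    field = layer * np.exp(1j * 2 * np.pi * rng.random((n, n)))
    # Back-propagate each layer to the CGH plane (z = 0) and superpose
    cgh += propagate(field, pitch, wavelength, -(i + 1) * dz)
```

The per-layer cost is a fixed pair of FFTs, which is why the computational complexity is independent of the object's geometric complexity.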
The blue wavelength, the shortest, generates a slightly smaller rectangular viewing zone than the green and red wavelengths. Therefore, if each RGB viewing zone is divided into 27×27 subdivisions, the red, green, and blue subdivision units do not match each other. For example, a distorted image is observed because the (1,1) viewing window has a different position for each color, as seen in Fig. 6. Figure 7 shows the field distribution on the eye-lens plane and the reconstructed image on the retinal plane for a conventional full-color CGH calculated for the R, G, and B wavelengths without any special compensating technique. With the conventional method, the size of the signal field distribution on the eye-lens plane (viewing zone) differs for the R, G, and B wavelengths.
As a result, as shown in Fig. 7(d), the subdivision units of each color do not correspond well on the full-color eye-lens plane, resulting in a color mismatch problem. Reconstructed images illustrating this phenomenon on the retinal plane are given in Figs. 7(e)-(g). At the center of the eye-lens plane, the signals of each color are matched, resulting in accurate full-color results in the reconstructed image. On the periphery, however, the field distributions of each color are poorly matched, so the color mismatch problem appears, as indicated by the red circles on the reconstructed images.
To accurately represent full color on the eye-lens plane, it is therefore necessary to match the viewing zones of all colors to the size of the blue zone, which has the shortest wavelength, so that the subdivision units of the colors coincide. The main idea of the color matching method presented in Fig. 6(b) is simple: the size of the eye-lens plane is set to that of the blue zone for all colors, and the RGB partial CGHs are calculated on the blue-wavelength subdivision viewing zones, as shown in Figs. 8(a)-(d). The color mismatch phenomenon then does not appear in the full-color viewing window, as shown in Figs. 8(d)-(g). With the color matching method, the signal distributions of the R, G, and B wavelengths on the eye-lens plane of the full-color CGH have the same size, so the subdivision unit signals match at all positions on the eye-lens plane. Therefore, as shown in Figs. 8(e)-(f), accurate full-color objects can be expressed at side positions as well as at the center of the viewing window. In Figs. 7 and 8, a full complex CGH is used to make the color matching method easier to understand.
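The wavelength dependence of the zone size can be sketched with the grating equation: for pitch p, the first-order half-angle is asin(λ/(2p)), so the viewing-zone half-width at distance z is z·tan(asin(λ/(2p))). The 1 m viewing distance is the experimental value from this paper; the exact zone-size formula used by the authors is not reproduced here, so this is an order-of-magnitude illustration.

```python
import numpy as np

pitch = 0.425e-6                                  # pixel pitch from the paper
z = 1.0                                           # viewing distance (m)
wavelengths = {"R": 638e-9, "G": 532e-9, "B": 488e-9}

# Viewing-zone half-width per color at the eye-lens plane
half_widths = {c: z * np.tan(np.arcsin(wl / (2 * pitch)))
               for c, wl in wavelengths.items()}

# Unmatched case: dividing each color's own zone into 27x27 gives
# subdivision units of different sizes per color
subdiv_unmatched = {c: 2 * hw / 27 for c, hw in half_widths.items()}

# Color matching: every color uses the blue (smallest) zone, so the
# 27x27 subdivision grids coincide exactly
subdiv_matched = {c: 2 * half_widths["B"] / 27 for c in wavelengths}
```

Because blue gives the smallest zone, clipping all colors to the blue zone is the only choice that keeps every subdivision inside each color's physically accessible viewing zone.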

Noise filter for improving image quality
Even though the total viewing zone area is fitted to the viewing zone of the shortest (blue) wavelength, each elementary viewing zone size still differs for the R, G, and B wavelengths, as shown in Fig. 9(a). In the depthmap CGH synthesis, the R and G subdivision components overlap their neighboring subdivision units. In practice, this overlap induces an erroneous holographic image when the viewer's eye is placed in the overlapping area of the viewing zone. Thus, it is necessary to filter the R and G components to trim the oversized parts and fit them into the exact size of the blue viewing zone. We refer to this filtering technique as B-zone filtering. Figure 9 compares holographic image synthesis results with and without B-zone filtering of the R and G components. Double-image deterioration is observed without filtering, while with filtering the deterioration is removed. Regarding computational efficiency, the filtering and noise reduction process is the most time-consuming part. Common sense suggests that the computational burden is mainly imposed by the CGH pattern calculation; in practice, however, our observation is that more effort should be devoted to accelerating the filtering process. The filtering process is computationally heavy because M × N individual CGHs are located in the CGH plane and, to apply the filtering method, each must be sent to the eye-lens plane through the inverse Fresnel transform [23]. The filtered field distribution must then be sent back to the CGH plane through the forward Fresnel transform. In this process, each CGH filtering step is performed on W_{m,n}(x_1, y_1) at the full resolution of the total CGH, making the calculation intensive. This issue will be addressed in continuing research.
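The round-trip structure of B-zone filtering can be sketched as follows. The inverse/forward Fresnel transforms of ref. [23] are approximated here by plain centered FFTs (omitting their quadratic phase factors), and the mask geometry is schematic, so this shows the structure of the step rather than the paper's exact implementation.

```python
import numpy as np

def to_eye_lens_plane(cgh):
    # Stand-in for the inverse Fresnel transform to the eye-lens plane
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(cgh)))

def from_eye_lens_plane(field):
    # Stand-in for the forward Fresnel transform back to the CGH plane
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

def b_zone_filter(cgh, keep_fraction):
    """Trim an R or G component so it fits the smaller blue viewing zone.

    keep_fraction is the ratio of the blue zone size to this color's
    zone size (< 1 for R and G).
    """
    field = to_eye_lens_plane(cgh)
    n = field.shape[0]
    half = int(n * keep_fraction / 2)
    c = n // 2
    mask = np.zeros_like(field)
    mask[c - half:c + half, c - half:c + half] = 1.0   # central B zone
    return from_eye_lens_plane(field * mask)

# Example: trim a red component to the blue-zone extent (ratio 488/638)
cgh_in = np.random.rand(256, 256).astype(complex)
filtered_red = b_zone_filter(cgh_in, keep_fraction=488 / 638)
```

The cost driver noted in the text is visible here: every one of the M × N components pays for a full-resolution transform pair.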
To sum up, the CGH synthesis algorithm has three core elements: (i) angular multiplexing of viewing-zone subdivision CGH components, (ii) viewing-zone design, and (iii) filtering of the R and G components, in which the viewing zone and subdivision areas of the R and G components are tailored to fit the B-component dimensions. As a result, the viewing zone and viewing angle of the CGH are constrained by the B viewing zone.

Experimental results
For the experimental verification of the proposed CGH design method, we adopt the RGB off-axis illumination setup illustrated in Fig. 10. The CGH pattern used in this test is fabricated in the form of a binary mask pattern [21]. The binary design generates DC and conjugate noise in the viewing zone. The signal, DC, and conjugate component distributions of the RGB wavelengths are formed in the viewing zone, as shown in Fig. 10. Three R·G·B laser beams, each with its own oblique incidence angle, illuminate the CGH, leading to spatial shifts of the diffraction fields on the eye-lens plane. The red laser beam, with a downward obliqueness, shifts the red trigonal pyramid to the center of the eye-lens plane; the green beam, with on-axis (non-oblique) incidence, positions the green sphere; and the blue beam, with an upward obliqueness, brings the lower blue rectangular parallelepiped to the exact center of the eye-lens plane. Under simultaneous illumination by the three R·G·B lasers, the viewer sees the red pyramid, green sphere, and blue parallelepiped at once through the overlap of the three R·G·B viewing zones, which is referred to as the region of interest (ROI) (Fig. 10). This means that an accurate full-color holographic image can be observed in the ROI. The ROI is roughly 1/3 of the total viewing zone area. Precise alignment of the RGB beams and an optimal RGB CGH component design are necessary. This method has the disadvantage of using only about 1/3 of the viewing zone in the vertical direction, but the full horizontal viewing angle is maintained. In Fig. 11, the experimental setup for observing the wide-viewing full-color CGH is presented. R (638 nm), G (532 nm), and B (488 nm) laser beams are collimated by 300 mm focal-length lenses to form expanded plane waves. Each collimated wave is steered by mirrors to impinge on the binary CGH sample at the wavelength-specific oblique incidence angle.
Here, the R·G·B beam incidence angles on the CGH are set to 9°, 0°, and 6°, respectively. The perspective views of the CGH are measured by changing the viewing angle of a charge-coupled device (CCD) camera mounted on a rotation stage. The viewing distance of the CCD from the CGH is set to 1 m. The implemented experimental setup is shown in Fig. 11(b), and the observed reconstruction results are presented in Fig. 12. We see that the CGH design method is capable of representing a holographic 3D image with a wide viewing angle and accurate colors. The parameters applied to the simulation and experiment are summarized in Table 1.
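As a rough illustration of the off-axis alignment, each color's diffraction field shifts on the eye-lens plane by approximately z·tan(θ_inc). The incidence angles and 1 m distance are from the text; the sign convention (downward for R, upward for B) is an assumption of this sketch.

```python
import numpy as np

z = 1.0                                    # CCD viewing distance (m)
# Incidence angles from the text; signs encode the assumed up/down obliqueness
angles_deg = {"R": -9.0, "G": 0.0, "B": +6.0}

# Approximate lateral shift of each color's field on the eye-lens plane
shifts = {c: z * np.tan(np.radians(a)) for c, a in angles_deg.items()}
for c, s in shifts.items():
    print(f"{c}: {s * 1e3:+.0f} mm shift on the eye-lens plane")
```

Tuning these three shifts is what overlaps the R, G, and B signal terms in the common ROI while pushing the DC and conjugate terms apart.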

Conclusion
In conclusion, we proposed a design method for wide-viewing full-color depthmap CGHs based on viewing-zone partitioning and verified its feasibility experimentally. The design process consists of the multiple steps of field mapping, color matching, noise filtering, and multiplexing of partial CGHs into a single high-definition CGH. It has been shown that this stepwise process produces a wide-viewing-angle full-color CGH pattern without 3D perspective distortion or color dispersion. We believe the static CGH experiment in this paper supports the feasibility and future research direction of wide-viewing-angle full-color CGHs using binary SLM technology with RGB simultaneous illumination.