Accelerated synthesis of wide-viewing angle polygon computer-generated holograms using the interocular affine similarity of three-dimensional scenes

The interocular affine similarity of three-dimensional scenes is investigated, and a novel accelerated reconfiguration algorithm for intermediate-view polygon computer-generated holograms based on interocular affine similarity is proposed. We demonstrate through numerical simulations of full-color polygon computer-generated holograms that the proposed intermediate-view reconfiguration algorithm is particularly useful for the computation of wide-viewing-angle polygon computer-generated holograms. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

OCIS codes: (200.4560) Optical data processing; (090.1995) Digital holography; (090.2870) Holographic display.

References and links

1. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, "Three-dimensional display technologies of recent interest: principles, status, and issues," Appl. Opt. 50(34), H87–H115 (2011).
2. J.-H. Park, "Recent progress in computer-generated holography for three-dimensional scenes," J. Inform. Displ. 18(1), 1–12 (2017).
3. S.-C. Kim, J.-M. Kim, and E.-S. Kim, "Effective memory reduction of the novel look-up table with one-dimensional sub-principle fringe patterns in computer-generated holograms," Opt. Express 20(11), 12021–12034 (2012).
4. H. Kang, E. Stoykova, and H. Yoshikawa, "Fast phase-added stereogram algorithm for generation of photorealistic 3D content," Appl. Opt. 55(3), A135–A143 (2016).
5. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, "Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display," Opt. Express 18(19), 19504–19509 (2010).
6. T. Shimobaba and T. Ito, "Fast generation of computer-generated holograms using wavelet shrinkage," Opt. Express 25(1), 77–87 (2017).
7. S. Jiao, Z. Zhuang, and W. Zou, "Fast computer generated hologram calculation with a mini look-up table incorporated with radial symmetric interpolation," Opt. Express 25(1), 112–123 (2017).
8. A. Symeonidou, D. Blinder, and P. Schelkens, "Colour computer-generated holography for point clouds utilizing the Phong illumination model," Opt. Express 26(8), 10282–10298 (2018).
9. T. Ichikawa, K. Yamaguchi, and Y. Sakamoto, "Realistic expression for full-parallax computer-generated holograms with the ray-tracing method," Appl. Opt. 52(1), A201–A209 (2013).
10. T. Ichikawa, T. Yoneyama, and Y. Sakamoto, "CGH calculation with the ray tracing method for the Fourier transform optical system," Opt. Express 21(26), 32019–32031 (2013).
11. K. Wakunami and M. Yamaguchi, "Calculation for computer generated hologram using ray-sampling plane," Opt. Express 19(10), 9086–9101 (2011).
12. Y. Zhao, L. Cao, H. Zhang, D. Kong, and G. Jin, "Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method," Opt. Express 23(20), 25440–25449 (2015).
13. J. Roh, K. Kim, E. Moon, S. Kim, B. Yang, J. Hahn, and H. Kim, "Full-color holographic projection display system featuring an achromatic Fourier filter," Opt. Express 25(13), 14774–14782 (2017).
14. T. Senoh, Y. Ichihashi, R. Oi, H. Sasaki, and K. Yamamoto, "Study on a holographic TV system based on multi-view images and depth maps," Proc. SPIE 8644, 86440A (2013).
15. H. Kim, J. Hahn, and B. Lee, "Mathematical modeling of triangle-mesh-modeled three-dimensional surface objects for digital holography," Appl. Opt. 47(19), D117–D127 (2008).
16. D. Im, E. Moon, Y. Park, D. Lee, J. Hahn, and H. Kim, "Phase-regularized polygon computer-generated holograms," Opt. Lett. 39(12), 3642–3645 (2014).
17. S.-B. Ko and J.-H. Park, "Speckle reduction using angular spectrum interleaving for triangular mesh based computer generated hologram," Opt. Express 25(24), 29788–29797 (2017).
18. K. Matsushima, "Computer-generated holograms for electro-holography," Appl. Opt. 44, 4607–4614 (2005).
19. K. Matsushima and S. Nakahara, "Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method," Appl. Opt. 48(34), H54–H63 (2009).
20. Y. Tsuchiyama and K. Matsushima, "Full-color large-scaled computer-generated holograms using RGB color filters," Opt. Express 25(3), 2016–2030 (2017).
21. K. Matsushima and N. Sonobe, "Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects," Appl. Opt. 57(1), A150–A156 (2018).
22. J.-H. Park, S.-B. Kim, H.-J. Yeom, H.-J. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, and S.-B. Ko, "Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram," Opt. Express 23(26), 33893–33901 (2015).
23. W. Lee, D. Im, J. Paek, J. Hahn, and H. Kim, "Semi-analytic texturing algorithm for polygon computer-generated holograms," Opt. Express 22(25), 31180–31191 (2014).
24. L. Ahrenberg, P. Benzie, M. Magnor, and J. Watson, "Computer generated holography using parallel commodity graphics hardware," Opt. Express 14(17), 7636–7641 (2006).
25. G. Li, K. Hong, J. Yeom, N. Chen, J.-H. Park, N. Kim, and B. Lee, "Acceleration method for computer-generated spherical hologram calculation of real objects using graphics processing unit," Chin. Opt. Lett. 12(6), 060016 (2014).
26. T. Shimobaba, T. Ito, N. Masuda, Y. Ichihashi, and N. Takada, "Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL," Opt. Express 18(10), 9955–9960 (2010).
27. Y.-H. Seo, H.-J. Choi, J.-S. Yoo, and D.-W. Kim, "Cell-based hardware architecture for full-parallel generation algorithm of digital holograms," Opt. Express 19(9), 8750–8761 (2011).
28. N. Takada, T. Shimobaba, H. Nakayama, A. Shiraki, N. Okada, M. Oikawa, N. Masuda, and T. Ito, "Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system," Appl. Opt. 51(30), 7303–7307 (2012).
29. J. Cho, J. Hahn, and H. Kim, "Fast reconfiguration algorithm of computer generated holograms for adaptive view direction change in holographic three-dimensional display," Opt. Express 20(27), 28282–28291 (2012).
30. D. Im, J. Cho, J. Hahn, B. Lee, and H. Kim, "Accelerated synthesis algorithm of polygon computer-generated holograms," Opt. Express 23(3), 2863–2871 (2015).
31. Y. Pan, Y. Wang, J. Liu, X. Li, and J. Jia, "Fast polygon-based method for calculating computer-generated holograms in three-dimensional display," Appl. Opt. 52(1), A290–A299 (2013).
32. A. W. Lohmann, R. G. Dorsch, D. Mendlovic, Z. Zalevsky, and C. Ferreira, "Space-bandwidth product of optical signals and systems," J. Opt. Soc. Am. A 13(3), 470–473 (1996).
33. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, "Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators," Opt. Express 16(16), 12372–12386 (2008).
34. H. Kim, J. Hahn, and B. Lee, "Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays," Opt. Express 17(8), 6389–6396 (2009).
35. Y. Lim, K. Hong, H. Kim, H.-E. Kim, E.-Y. Chang, S. Lee, T. Kim, J. Nam, H.-G. Choo, J. Kim, and J. Hahn, "360-degree tabletop electronic holographic display," Opt. Express 24(22), 24999–25009 (2016).
36. T. Inoue and Y. Takaki, "Table screen 360-degree holographic display using circular viewing-zone scanning," Opt. Express 23(5), 6533–6542 (2015).
37. Y. Takaki and S. Uchida, "Table screen 360-degree three-dimensional display using a small array of high-speed projectors," Opt. Express 20(8), 8848–8861 (2012).
38. T. Kakue, T. Nishitsuji, T. Kawashima, K. Suzuki, T. Shimobaba, and T. Ito, "Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors," Sci. Rep. 5(1), 11750 (2015).


1. Introduction
Three-dimensional (3D) imaging and display technologies have been under active development for the past two decades. The basic principle of 3D display technology [1] is the utilization of binocular 3D cues in the human visual perception system, with interocular disparity being the most effective of these cues. In the classical sense, interocular disparity presumes that the parallax views of a 3D scene are completely different from each other.
In general, holographic 3D displays are considered the ultimate form of 3D display because they deliver the most natural 3D images, with accommodation and vergence matched [2]. This accommodation-vergence match is ascribed to the interocular disparity encoded in the CGH pattern. Computer-generated holograms (CGHs) for holographic 3D displays contain all of the information on the continuous parallax views of a 3D scene, which is recorded in a single two-dimensional continuous complex fringe pattern and produces the motion parallax effect as well as the accommodation-vergence match.
From an information-theoretic perspective on CGHs, we need to introduce a concept that contrasts with interocular disparity: interocular similarity, whereby different directional views of a 3D scene share a strong resemblance. Interocular similarity is worth analyzing in depth because it gives new insight into the information content of a CGH, and exploiting it enables the acceleration of CGH synthesis. If the interocular similarity of a 3D scene with a finite viewing angle can be exploited to synthesize intermediate-view CGHs with a reduced total calculation amount, a reduction in mathematical complexity for accelerating CGH calculation can be expected. In this context, interocular similarity leads to the expectation that continuous parallax views share informational similarity, so the actual informational capacity of a CGH can be smaller than that implied by the conventional space-bandwidth product [32,33]. By extension, the space-bandwidth product is only an upper bound on the amount of information containable in a finite-viewing-angle 3D image, because the conventional space-bandwidth product assumes that there is no relationship between adjacent views.
This fundamental information-theoretic perspective on CGHs is the motivation of this paper, whose primary questions are how the interocular similarity of 3D objects can be used efficiently to develop an accelerated algorithm for CGH synthesis and how interocular similarity can be represented efficiently. This paper presents a theoretical analysis of the interocular similarity among adjacent holographic images with angular separation. The interocular similarity between adjacent views can be represented by an affine transform of corresponding points; this property is investigated extensively and extended to the efficient synthesis of wide-viewing-angle polygon CGHs. An application of the proposed method to 360-degree multi-view CGH content generation [34–38] is presented.
This paper is structured as follows. In Section 2, a geometric model of 3D scene perception is described. In Section 3, the affine transform analysis of the interocular similarity of a 3D scene is presented. In Section 4, an accelerated CGH algorithm based on interocular similarity is proposed, built on a wave-optic interpretation of the affine transformation for CGH calculation; numerical experiments and the evaluation of the proposed accelerated CGH algorithm are presented with an example of 360-degree multi-view CGH content generation. Finally, concluding remarks are provided in Section 5.

2. Geometric model of three-dimensional scene perception
In this section, we present a geometric model of 3D scene perception and analyze the interocular similarity of a 3D scene. The focus of the analysis is the non-linear relationship, derived from the 3D scene perception model, between two different parallax views in their retina spaces. This non-linear relationship can be linearly approximated by an affine transformation even for fairly large angular separations between the two views, a process referred to as the interocular affine similarity transform. The tolerance range of the interocular affine similarity is analyzed numerically using this transform. The interocular affine similarity established in this section is then applied to the accelerated CGH synthesis algorithm in Section 4.
A basic property of the visual perception system is that the monocular imaging system of the eye allows the viewer to see 3D objects by automatically adjusting its accommodation to a convergence point. In Fig. 1, two monocular imaging systems that share a convergence point are illustrated, with the global reference coordinate system and the local coordinate systems of the left and right eyes denoted as $xyz$, $u_1v_1w_1$, and $u_2v_2w_2$, respectively. When both eyes gaze at the convergence point, the foci of the two eyes are automatically adjusted to that point, and the perceived image in each eye varies with changes in eye position. Here, we develop a geometric model of this monocular imaging for an arbitrary eye location and rotation.
Let the convergence point be $\mathbf{p}_c = (x_0, y_0, z_0)$ in the global coordinate system, where the projection center $N$ is the center of the eye lens. In normal conditions, the unit vector $\mathbf{u}$ lies in the viewing plane, which is the plane spanned by the $\mathbf{u}$ and $\mathbf{w}$ vectors, and the unit vector $\mathbf{v}$ is normal to the viewing plane.
The optic axis vector of the eye in the global coordinate system is given by

$$\hat{\mathbf{w}} = \left( \cos\phi \sin\theta,\; \sin\phi \sin\theta,\; \cos\theta \right),$$

where $\theta$ and $\phi$ are the polar and azimuthal angles of the view direction. The coordinates of the projection center $N$ are then solved as

$$\mathbf{N} = \mathbf{p}_c - t\,\hat{\mathbf{w}} = \left( x_0 - t\cos\phi\sin\theta,\; y_0 - t\sin\phi\sin\theta,\; z_0 - t\cos\theta \right),$$

where $t$ is the distance between the convergence point and the eye lens. The v-axis unit vector of the retina coordinate system is obtained from the local spherical basis vectors and the tilt angle $\tau$ of the eye,

$$\hat{\mathbf{v}} = \cos\tau \left( \cos\phi\cos\theta,\; \sin\phi\cos\theta,\; -\sin\theta \right) + \sin\tau \left( -\sin\phi,\; \cos\phi,\; 0 \right),$$

where $\tau$ is adjusted to make $\hat{\mathbf{u}} = \hat{\mathbf{v}} \times \hat{\mathbf{w}}$ parallel to the $xy$ plane.

Because the eye's focus remains on the convergence point, the focal length of the eye varies with the convergence distance in accordance with the lens equation

$$\frac{1}{f} = \frac{1}{t} + \frac{1}{d_e},$$

where $d_e$ is the distance between the eye lens and the retina plane and $t$ is the distance between the convergence point and the eye plane from Eq. (2). In addition, in order to describe the wave-optic imaging and the CGH synthesis theory consistently within the same framework, we define an adaptive global coordinate system for the eye, as seen in Fig. 1(b). In the adaptive global coordinate system of the eye, $x'y'z'$, the 3D scene is rotated so that the optic axis of the eye is aligned with the global $z$-axis; the axis $z'$ is matched to the optic axis $\mathbf{w}$, and the plane $x'y'$ is parallel to the plane $uv$. The adaptive global coordinates are therefore obtained by the rotation

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \mathbf{R} \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad \mathbf{R} = \begin{pmatrix} \hat{\mathbf{u}}^{T} \\ \hat{\mathbf{v}}^{T} \\ \hat{\mathbf{w}}^{T} \end{pmatrix},$$

with the projection center in the adaptive global coordinate system set as $\mathbf{N}' = \mathbf{R}\,\mathbf{N}$.

The imaging of a 3D object in the retina space of the observer's eye is interpreted in the adaptive coordinate system, which is schematically illustrated in Fig. 2. For simplicity, from this point forward, the notation $xyz$ will be used instead of $x'y'z'$ to represent the adaptive global coordinate system. Consider the imaging of a 3D object through a single eye, as illustrated in Fig. 2, where a triangular facet in object space is imaged into the retina space of the viewer's eye. The center of mass of the triangular facet is denoted by $(x_c, y_c, z_c)$, and the eye focus is set to this center of mass. A triangular facet in object space is delivered to the retina space through the geometric imaging transformation [16]. The focal length of the eye lens $f$ is set by Eq. (12), where the parameter $t$ is obtained by substituting Eq. (14) into Eq. (12). In a similar manner, the retina-space image point $(u, v, w)$ of the center of mass is solved, where the parameter $s$ is obtained by substituting Eq. (16) into Eq. (13).

Each triangular facet has its own local coordinate system with the origin set to the center of mass, denoted by $x'y'z'$ in object space and $u'v'w'$ in retina space in Fig. 2(b) [15]. The local coordinates of a point in object space are solved from the global coordinates through a rotation of the form

$$\mathbf{R}(\theta, \phi) = \begin{pmatrix} \cos\theta\cos\phi & \cos\theta\sin\phi & -\sin\theta \\ -\sin\phi & \cos\phi & 0 \\ \sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta \end{pmatrix},$$

and the local coordinates of the corresponding point in retina space are given by the analogous rotation with the retina-space angles. Here, $(x, y, z)$ and $(u, v, w)$ have the corresponding local coordinates $(x', y', z')$ and $(u', v', w')$, and are the centers of mass of the triangles $P_1 P_2 P_3$ and $P_1^r P_2^r P_3^r$, respectively. The local coordinates of object space are functionally related to those of retina space through the perspective mapping, where $t$ is given by Eq. (15). Inversely, $(u', v', w')$ is expressed using the parameter $s$ of Eq. (17) together with Eqs. (21) and (22). Figure 3 illustrates the perception of a 3D scene by two eyes positioned slightly apart while viewing the same scene.
The first row of Fig. 3 shows that the perspectives of the two eyes differ for the same scene. The second row shows that the uniform grid on the local coordinates of object space is non-linearly mapped onto the local coordinates of each retina space. The two non-linear grids also exhibit different patterns because the locations and view directions of the two eyes do not coincide.
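As a toy illustration of this non-linear grid mapping (with arbitrary parameter values assumed here, independent of the paper's simulation), projecting a uniform grid on a tilted facet through a pinhole shows how the retina-space grid spacing becomes non-uniform:

```python
import numpy as np

d_e, z0 = 0.017, 0.5                         # lens-retina distance, facet depth
tilt = np.deg2rad(40)                        # facet tilted about its u-axis
u, v = np.meshgrid(np.linspace(-0.05, 0.05, 11),
                   np.linspace(-0.05, 0.05, 11))
x, y, z = u, v * np.cos(tilt), z0 + v * np.sin(tilt)   # grid in eye coordinates
ur, vr = -d_e * x / z, -d_e * y / z          # retina grid: spacing varies with v
```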
It is important to consider the transform of a point from the local coordinate system of a facet to the local coordinate system of the adaptive global coordinate system. This relationship is described by Eq. (24); the full derivation of Eq. (24), with the definitions of $\mathbf{R}_{\mathrm{GtoL}}$, $\mathbf{R}_n$, and $\mathbf{R}_{\mathrm{LtoG}}$, is given in the Appendix. From Eq. (24), the redefined grid on the adaptive local coordinates is solved for the uniform grid of the original local coordinates.
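A sketch of this coordinate chain, assuming Eq. (24) composes a facet-local-to-global rotation with a global-to-adaptive-local one (the matrix names echo the Appendix notation, but the signatures and sample values are illustrative):

```python
import numpy as np

def rotation(theta, phi):
    """Rotation of the form used above: aligns the z-axis with the
    (theta, phi) direction."""
    return np.array([
        [np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)],
        [-np.sin(phi),                np.cos(phi),                  0.0],
        [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),  np.cos(theta)]])

def facet_to_adaptive(p_local, R_LtoG, c_facet, R_GtoL, c_eye):
    """Eq. (24)-style chain: facet-local -> global -> adaptive-local."""
    p_global = R_LtoG @ p_local + c_facet   # lift the point out of the facet frame
    return R_GtoL @ (p_global - c_eye)      # re-express it in the adaptive frame

R_LtoG = rotation(np.deg2rad(30), np.deg2rad(10)).T   # facet frame -> global
R_GtoL = rotation(np.deg2rad(5), 0.0)                 # global -> adaptive frame
p_adaptive = facet_to_adaptive(np.array([0.01, 0.0, 0.0]), R_LtoG,
                               np.array([0.0, 0.0, 0.5]), R_GtoL, np.zeros(3))
```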

3. Interocular affine similarity of a three-dimensional scene
If a triangular facet has a plain texture, the observer notices variation only in its shape and shading in response to spatial changes in the observer's location and view direction. For a textured triangular facet, however, the observer can perceive not only changes in shape and shading but also deformation of the texture pattern. As depicted in Fig. 4, an observer located at position A, which is close to the normal vector of the triangular facet, sees a mostly undistorted texture pattern, whereas an observer located at position C perceives a highly distorted texture pattern because location C is far from the normal axis of the triangular facet. We first set up the referential retina space and its adjacent retina space, denoted by the subscripts ref and adj, respectively, as shown in Fig. 4(b).
Although the two triangles on the local coordinates of the referential and adjacent retina spaces appear to have different shapes, their relationship is described by an affine similarity transformation. It is determined from the three apexes of the triangular facet, $(u_1, v_1)$, $(u_2, v_2)$, and $(u_3, v_3)$, in the referential space and the corresponding apexes, $(u'_1, v'_1)$, $(u'_2, v'_2)$, and $(u'_3, v'_3)$, in the adjacent space, which give the correspondence conditions of Eqs. (25)-(27).
Equations (25)-(27) are combined into a single matrix equation, which is solved to produce the conversion relationship between the two local coordinates of the referential and adjacent retina spaces, Eq. (29).
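The apex-correspondence step is easy to state in code: three point pairs determine the 2D affine map exactly. The sketch below solves the combined matrix equation; the function name and sample apexes are illustrative assumptions.

```python
import numpy as np

def affine_from_triangles(ref, adj):
    """Solve adj_i = A @ ref_i + b from three apex correspondences,
    mirroring the matrix-equation step behind Eq. (29)."""
    ref = np.asarray(ref, float)            # (3, 2) apexes, referential space
    adj = np.asarray(adj, float)            # (3, 2) corresponding apexes
    M = np.hstack([ref, np.ones((3, 1))])   # augmented [u, v, 1] rows
    X = np.linalg.solve(M, adj)             # exact for non-degenerate triangles
    return X[:2].T, X[2]                    # A (2x2), b (2,)

ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
adj = [(0.10, 0.00), (1.05, 0.10), (0.05, 0.95)]
A, b = affine_from_triangles(ref, adj)
```

Because the 3×3 system is invertible for any non-degenerate triangle, the affine map between the two perceived triangles is unique.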
In the previous section, we established how the uniform grid on the local coordinates of object space is non-linearly mapped to that of retina space. In Fig. 3(b), the mapped grids on the local coordinates of the left and right eyes have different aspects because their positions and view directions clearly differ. However, if the two eyes are located near each other or their view directions do not differ significantly, we can define the conversion relationship between their local coordinates using Eq. (29). It should be noted that this assumption introduces some error, which is estimated in what follows. The validity of the affine interocular similarity is analyzed with a numerical simulation in which the observer is assumed to watch the center of a triangular facet lying in the xy plane while moving within a designated observation section, as shown in Fig. 5(a). Four observation sections are set up, designated in terms of the longitudinal angle θ (0, 15, …).
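One plausible way to carry out this tolerance analysis numerically (a sketch under assumed data, not the paper's simulation code) is to map the same facet grid exactly into the referential and adjacent retina spaces, fit the best affine map between the two point sets, and report the residual RMSE:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit dst ≈ A @ src + b for (N, 2) point sets."""
    M = np.hstack([src, np.ones((len(src), 1))])
    X = np.linalg.lstsq(M, dst, rcond=None)[0]
    return X[:2].T, X[2]

def affine_rmse(grid_ref, grid_adj):
    """Residual RMSE of the best affine approximation between two mapped grids."""
    grid_ref, grid_adj = np.asarray(grid_ref), np.asarray(grid_adj)
    A, b = fit_affine(grid_ref, grid_adj)
    resid = grid_adj - (grid_ref @ A.T + b)
    return np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
```

Sweeping the adjacent view across each observation section then traces how the residual grows with angular separation from the reference point.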

4. Affine-similarity transformation of holographic three-dimensional image light fields
In the polygon CGH synthesis theory [15,16] that we developed in previous papers, CGH patterns are obtained by propagating the observable holographic image in the retina plane to the CGH plane through the inverse cascaded generalized Fresnel transform [15]. When we calculate CGH patterns, therefore, the majority of the computation time is spent obtaining the holographic image in the retina plane. Calculating the observed image in the retina plane is a complex process because the light field distributions emitted by all of the unit triangles that make up the 3D object have to be synthesized in the retina plane. In particular, for multi-view CGH calculations, the computational complexity can be exceptionally high because it is proportional to the number of views recorded in the CGH pattern. However, the similarity expressed by the affine transformation can be exploited to significantly accelerate multi-view CGH calculation.
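The paper performs the retina-to-CGH step with the inverse cascaded generalized Fresnel transform [15]; as a self-contained stand-in that conveys the same FFT-based structure, the sketch below implements standard angular-spectrum propagation of a sampled complex field (all parameter names and values are assumptions):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field over distance z by the standard
    angular-spectrum method (negative z back-propagates)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

For a full-color CGH, each of the red, green, and blue channels would be propagated independently with its own wavelength, as in [13].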
In this section, an affine-reconfigured polygon CGH is formulated, and the validity of the affine approximation and its effect on efficiency are tested against an exactly re-computed CGH model. The approximate light field distribution of the adjacent retina space is derived by referring to that of the referential retina space. First, the angular spectrum representations of the light fields of the triangular facets in the referential and adjacent retina spaces are established, together with the mathematical relationship between them. Substituting the affine coordinate relationship of Eq. (32) into the angular spectrum integral yields an equality between the two angular spectrum integrals, from which the angular spectrum of the adjacent local field is obtained directly from that of the referential local field. To convey the meaning of the notation more clearly, we attach the notation @L behind adj to indicate the adjacent-view field expressed in local coordinates. From Eq. (38), the components of the Fourier spatial frequency vector of the adjacent view are obtained by a remapping of the referential spatial frequencies.

This means that the angular spectrum of the triangular facet in the adjacent global coordinate system is calculated from that of the referential local coordinate system. Therefore, we establish the CGH computation process as follows: (1) prepare the primitive angular spectrum of the referential local coordinate system in advance, (2) compute the light field distribution of the adjacent global coordinate system by reconfiguring the primitive angular spectrum data, and (3) convert the light field distribution in the retina plane to the CGH pattern using the inverse cascaded generalized Fresnel transform from the retina plane to the CGH plane [13,15]. The intermediate-view CGH is thus generated not by re-computing the entire process but by reconfiguring the primitive data of the reference observation point. This process is expected to significantly reduce the computational complexity of wide-viewing-angle polygon CGHs. A code sketch of these steps follows below.
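The true reconfiguration follows the Eq. (38) frequency mapping derived from the affine similarity; the sketch below stands in for it with a simple linear remapping of the primitive spectrum's frequency coordinates, resampled with SciPy's map_coordinates, so it illustrates the shape of the pipeline rather than the exact formula. All names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def primitive_spectrum(local_field):
    """Step (1): angular spectrum of the referential local field, computed once."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(local_field)))

def reconfigure_spectrum(G_ref, A):
    """Step (2): resample the primitive spectrum on linearly remapped
    frequency coordinates (a stand-in for the Eq. (38) mapping)."""
    ny, nx = G_ref.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)
    # pull each adjacent-view frequency sample back through the affine map
    pts = np.linalg.inv(A) @ np.stack([ys.ravel() - cy, xs.ravel() - cx])
    coords = np.stack([pts[0] + cy, pts[1] + cx])
    re = map_coordinates(G_ref.real, coords, order=1).reshape(ny, nx)
    im = map_coordinates(G_ref.imag, coords, order=1).reshape(ny, nx)
    return re + 1j * im                     # step (3) then propagates this field

G = primitive_spectrum(np.random.rand(256, 256))
G_adj = reconfigure_spectrum(G, np.array([[1.0, 0.05], [-0.05, 1.0]]))
```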
To assess the efficiency of the proposed method, we compared the computing times for a full-color CGH using the exact and approximate methods. In the calculation of the full-color CGH, the red (633 nm), green (532 nm), and blue (473 nm) components were calculated independently, without color dispersion [13]. Similar to Fig. 4, we assumed that a textured triangular facet floats in object space and that an observer looks at it from a specific location. The computation was performed in MATLAB on a workstation with a 2.27 GHz Intel Xeon E5520 CPU and 48 GB of memory. The size of the single-view CGH is 2,201 × 2,201 pixels. Figure 7 displays the simulation results. Using both methods under the same computational conditions, we simulated the observer looking at specific objects while moving around them. As shown in Fig. 9, a textured cube floats 5 mm above a checkerboard, and the observer looks at this scene along a diagonal direction toward the floating object.
We assume that the observer's rotational range is 0 to 360 degrees in the azimuthal direction, sampled at an interval of 1 degree; thus, 360 light field distributions must be calculated, one per viewpoint. With the exact method, this simulation requires 360 full re-computations. As indicated in Fig. 8(a), the exact method has two steps: (1) obtaining a properly distorted texture pattern on the local coordinates in the observer's retina space and (2) numerically calculating the angular spectrum using a fast Fourier transform (FFT) algorithm and interpolation. The entire process takes 11.8513 seconds. The approximate method, on the other hand, has three steps: (1) obtaining the properly distorted texture pattern on the local coordinates in the referential retina space, (2) calculating its angular spectrum with the FFT (this result is regarded as the primitive data), and (3) obtaining the angular spectrum of each view by reconfiguring the primitive data, as indicated in Fig. 8(b).
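A hypothetical micro-benchmark of the two pipelines (in Python rather than the paper's MATLAB, with arbitrary sizes) makes the expected saving visible: the exact route pays for a texture warp, an FFT, and a spectrum interpolation per view, while the approximate route pays for one FFT up front and only a remap per view.

```python
import time
import numpy as np
from scipy.ndimage import map_coordinates

N, views = 1024, 8                           # a few views; the paper sweeps 360
texture = np.random.rand(N, N)
ys, xs = np.mgrid[0:N, 0:N].astype(float)
warp = np.stack([ys * 1.001, xs * 0.999])    # toy per-view coordinate distortion

t0 = time.perf_counter()
for _ in range(views):                       # exact: warp + FFT + interpolation
    distorted = map_coordinates(texture, warp, order=1)
    spec = np.fft.fft2(distorted)
    map_coordinates(spec.real, warp, order=1)
    map_coordinates(spec.imag, warp, order=1)
t_exact = time.perf_counter() - t0

G = np.fft.fft2(texture)                     # approximate: one primitive FFT ...
t0 = time.perf_counter()
for _ in range(views):                       # ... then only a remap per view
    map_coordinates(G.real, warp, order=1)
    map_coordinates(G.imag, warp, order=1)
t_approx = time.perf_counter() - t0
print(f"exact {t_exact:.2f} s vs approximate {t_approx:.2f} s for {views} views")
```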

Fig. 1. Convergence and accommodation in the binocular visual perception system: (a) global and local coordinates and (b) adaptive global coordinate system.

Fig. 2. Geometric imaging of a triangular facet in object space into the retina space of the viewer's eye, with the apexes P1, P2, P3 and the facet-local coordinate systems.

Fig. 3. Perception of a 3D scene by two eyes positioned slightly apart: the perspective views of the left and right eyes (first row) and the non-linear grid mapping of the object-space local coordinates into each retina space (second row).

Fig. 4. Deformation of the imaged texture pattern with observation position: (a) observers at positions A, B, and C around the triangular facet and (b) the referential and adjacent retina spaces.

Fig. 5. (a) Observation sections used in the numerical analysis of the affine approximation and (b) comparison of the two grids on the local coordinates.

Fig. 6. Accuracy of the affine approximation: RMSE between the exact method and the approximate method.

Fig. 7. Simulation results for the full-color CGH computed with (a) the exact method and (b) the approximate method.

Fig. 9. Numerically reconstructed images of the full-color CGH computed by the proposed method: a textured cube floating 5 mm above a checkerboard, observed along a diagonal direction.