High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses

Conventional head-mounted displays present a different image to each eye and thereby create a three-dimensional (3D) sensation for viewers. This method controls only the stimulus to vergence, while the stimulus to accommodation remains fixed at the apparent location of the physical display. The disrupted coupling between vergence and accommodation can cause considerable visual discomfort. To address this problem, a novel multi-focal-plane 3D display system is proposed in this paper. A stack of switchable liquid crystal Pancharatnam-Berry phase lenses creates real depths for each eye, providing approximate focus cues and relieving the discomfort caused by the vergence-accommodation conflict. The proposed multi-focal-plane generation method has great potential for both virtual reality and augmented reality applications, where correct focus cues are highly desirable.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

OCIS codes: (120.2040) Displays; (230.3720) Liquid-crystal devices; (050.1965) Diffractive lenses; (060.5060) Phase modulation.

Vol. 26, No. 4 | 19 Feb 2018 | OPTICS EXPRESS 4863 | https://doi.org/10.1364/OE.26.004863 | Received 12 Jan 2018; revised 9 Feb 2018; accepted 10 Feb 2018; published 15 Feb 2018

References and links

1. J. Geng, "Three-dimensional display technologies," Adv. Opt. Photonics 5(4), 456–535 (2013).
2. B. Lee, "Three-dimensional displays, past and present," Phys. Today 66(4), 36–41 (2013).
3. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue," J. Vis. 8(3), 1–30 (2008).
4. S. J. Watt, K. Akeley, M. O. Ernst, and M. S. Banks, "Focus cues affect perceived depth," J. Vis. 5(10), 834–862 (2005).
5. M. Mon-Williams, J. P. Wann, and S. Rushton, "Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display," Ophthalmic Physiol. Opt. 13(4), 387–391 (1993).
6. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, "A three-color, solid-state, three-dimensional display," Science 273(5279), 1185–1189 (1996).
7. D. Lanman and D. Luebke, "Near-eye light field displays," ACM Trans. Graph. 32(6), 220 (2013).
8. H. S. Park, R. Hoskinson, H. Abdollahi, and B. Stoeber, "Compact near-eye display system using a superlens-based microlens array magnifier," Opt. Express 23(24), 30618–30633 (2015).
9. C.-K. Lee, S. Moon, S. Lee, D. Yoo, J.-Y. Hong, and B. Lee, "Compact three-dimensional head-mounted display system with Savart plate," Opt. Express 24(17), 19531–19544 (2016).
10. F. C. Huang, K. Chen, and G. Wetzstein, "The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues," ACM Trans. Graph. 34(4), 60 (2015).
11. S. Lee, C. Jang, S. Moon, J. Cho, and B. Lee, "Additive light field displays: realization of augmented reality with holographic optical elements," ACM Trans. Graph. 35(4), 60 (2016).
12. N. Matsuda, A. Fix, and D. Lanman, "Focal surface displays," ACM Trans. Graph. 36(4), 86 (2017).
13. H. Ren and S. T. Wu, Introduction to Adaptive Lenses (Wiley, 2012).
14. S. Ravikumar, K. Akeley, and M. S. Banks, "Creating effective focus cues in multi-plane 3D displays," Opt. Express 19(21), 20940–20952 (2011).
15. Y. H. Lee, F. Peng, and S. T. Wu, "Fast-response switchable lens for 3D and wearable displays," Opt. Express 24(2), 1668–1675 (2016).
16. S. Liu and H. Hua, "A systematic method for designing depth-fused multi-focal plane three-dimensional displays," Opt. Express 18(11), 11562–11573 (2010).
17. G. D. Love, D. M. Hoffman, P. J. W. Hands, J. Gao, A. K. Kirby, and M. S. Banks, "High-speed switchable lens enables the development of a volumetric stereoscopic display," Opt. Express 17(18), 15716–15725 (2009).
18. S. W. Lee and S. S. Lee, "Focal tunable liquid lens integrated with an electromagnetic actuator," Appl. Phys. Lett. 90(12), 121129 (2007).
19. E. Hasman, V. Kleiner, G. Biener, and A. Niv, "Polarization dependent focusing lens by use of quantized Pancharatnam–Berry phase diffractive optics," Appl. Phys. Lett. 82(3), 328–330 (2003).
20. S. Pancharatnam, "Generalized theory of interference and its applications," Proc. Indian Acad. Sci. Sect. A 44(5), 247–262 (1956).
21. M. V. Berry, "Quantal phase factors accompanying adiabatic changes," Proc. R. Soc. Lond. A 392(1802), 45–57 (1984).
22. Y. Ke, Y. Liu, J. Zhou, Y. Liu, H. Luo, and S. Wen, "Optical integration of Pancharatnam-Berry phase lens and dynamical phase lens," Appl. Phys. Lett. 108(10), 101102 (2016).
23. N. V. Tabiryan, S. V. Serak, D. E. Roberts, D. M. Steeves, and B. R. Kimball, "Thin waveplate lenses of switchable focal length—new generation in optics," Opt. Express 23(20), 25783–25794 (2015).
24. N. V. Tabiryan, S. V. Serak, D. E. Roberts, D. M. Steeves, and B. R. Kimball, "Thin waveplate lenses: new generation in optics," Proc. SPIE 9565, 956512 (2015).
25. K. Gao, H. H. Cheng, A. K. Bhowmik, and P. J. Bos, "Thin-film Pancharatnam lens with low f-number and high quality," Opt. Express 23(20), 26086–26094 (2015).
26. Y. H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S. T. Wu, "Recent progress in Pancharatnam-Berry phase optical elements and the applications for virtual/augmented realities," Opt. Data Process. Storage 3(1), 79–88 (2017).
27. T. F. Coleman and Y. Li, "A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables," SIAM J. Optim. 6(4), 1040–1058 (1996).
28. J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, "Fabrication of ideal geometric-phase holograms with arbitrary wavefronts," Optica 2(11), 958–964 (2015).
29. H. Ren, S. Xu, Y. Liu, and S. T. Wu, "Switchable focus using a polymeric lenticular microlens array and a polarization rotator," Opt. Express 21(7), 7916–7925 (2013).


Introduction
Head-mounted displays are a key part of virtual reality (VR) and augmented reality (AR) devices, serving as a bridge between the computer-generated virtual world and the real one. Currently, in most commercial VR and AR devices, the three-dimensional (3D) effect is constructed on the principle of binocular disparity [1,2]. Although providing different images to the two eyes is a popular and effective way to create acceptable depth perception, this unnatural method has considerable drawbacks, such as the vergence-accommodation conflict [3], distorted depth perception [4], and visual fatigue [5]. To overcome these drawbacks, the display must provide physically real depths, so as to present not only the correct vergence but also the corresponding accommodation. Several technologies have this potential, including volumetric displays [6], integral displays [7,8], light field displays [9-11], and focal surface displays [12]. In most of these methods, a device with a rapidly switchable focal length plays a key role in generating multiple focal planes. Several approaches to tunable lenses [13-18] have been proposed; however, most of them are either too slow or too bulky for wearable display applications.
In this paper, a novel multi-focal-plane display based on fast-response switchable Pancharatnam-Berry lenses (PBLs) is proposed, satisfying the need for a fast (< 1 ms) and compact (< 10 cm) 3D display system. First, the basic principles and fabrication processes of PBLs, the key elements for generating multiple focal planes, are introduced. Second, the additive light field generation procedure, which uses a constrained linear least-squares method, is described. Third, with the factorized light fields, a high-resolution 3D scene is synthesized by a compact light field near-eye display system.

Switchable Pancharatnam-Berry phase lenses
The well-known Pancharatnam-Berry (PB) phase optical elements [19-25] are half-wave plates whose optical axis varies spatially in a specific way. The basic working principle of PB optical elements can be explained by Jones calculus as follows:

$$J_{\pm} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ \pm i \end{bmatrix}, \qquad (1)$$

where $J_+$ and $J_-$ represent the Jones vectors of left- and right-handed circularly polarized light (LCP and RCP), respectively. After passing through a half-wave plate with a local azimuthal angle ψ, the Jones vectors are changed to

$$R(\psi)\,W(\pi)\,R(-\psi)\,J_{\pm} = e^{\pm i 2\psi}\, J_{\mp}, \qquad (2)$$

where $R(\psi)$ and $W(\pi)$ are the rotation and retardation Jones matrices, respectively, and a constant phase factor has been dropped. While the handedness of the output light is switched, the light also accumulates a spatially varying phase ±2ψ set by the local azimuthal angle ψ. Moreover, PB optical elements act differently on RCP and LCP incident beams, because the accumulated phase has opposite signs for the two handednesses. A PB optical element becomes a PBL once the mapping from a centrosymmetric parabolic phase distribution to the local azimuthal angle is constructed, as shown in Fig. 1 and Eq. (3):

$$\psi(r) = \frac{\varphi(r)}{2} = \frac{\omega}{2c}\left(\sqrt{r^{2} + F^{2}} - F\right) \approx \frac{\omega r^{2}}{4cF}, \qquad (3)$$

where φ, ω, c, r, and F are the relative phase, angular frequency, speed of light in vacuum, radial coordinate, and focal length, respectively. To make the PBLs switchable, home-made fast-response liquid crystals are applied to prepatterned half-wave plate cells. PBLs can be driven actively or passively [23,26], as shown in Fig. 2(c). For active driving, a voltage applied across the PBL switches the LC directors between a well-defined lens-profile pattern parallel to the substrate (Fig. 1(b)) and a homogeneous alignment perpendicular to the substrate. For passive driving, an external polarization rotator (PR) (e.g., a combination of a quarter-wave plate and a twisted-nematic LC cell) is added to switch the handedness of the incident circularly polarized light. The optical power of a PBL can thus be switched between 0 and K in active driving, and between -K and +K in passive driving. By synchronizing a stack of fast-switching PBLs with a high-frame-rate flat-panel display, a fast-response, high-resolution 3D light field display system can be constructed. Specifically, in each sub-frame a computationally factorized image on the flat-panel display is presented at a desired depth by modulating the optical power of the PBLs, as shown in Fig. 2(b).
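The handedness flip and the ±2ψ geometric phase in Eq. (2) can be checked numerically with Jones calculus. The sketch below is a minimal verification, assuming the convention J± = (1, ±i)ᵀ/√2 and dropping the constant phase of W(π):

```python
import numpy as np

def rot(psi):
    """2x2 rotation matrix R(psi)."""
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])

# Retardation matrix of a half-wave plate, W(pi), up to a global phase.
W_pi = np.array([[1, 0], [0, -1]], dtype=complex)

def half_wave_plate(psi):
    """Jones matrix of a half-wave plate with its axis at azimuthal angle psi."""
    return rot(psi) @ W_pi @ rot(-psi)

# Circular polarization basis vectors J+ and J- (one common convention).
J_plus  = np.array([1,  1j]) / np.sqrt(2)
J_minus = np.array([1, -1j]) / np.sqrt(2)

psi = 0.3  # an arbitrary local azimuthal angle (rad)
out = half_wave_plate(psi) @ J_plus

# Handedness is flipped and the geometric phase exp(+i*2*psi) is acquired.
assert np.allclose(out, np.exp(2j * psi) * J_minus)
```

Running the same check on J− yields the opposite phase exp(−i·2ψ), which is exactly the sign asymmetry that lets one PBL act as a converging lens for one handedness and a diverging lens for the other.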

Additive type light field display factorization
With multiple generated focal planes, the image shown on the flat-panel display can be assigned to multiple depths, and a 3D scene can thus be created. Here, an additive type of light field factorization method is designed to generate the 2D images for the corresponding image depths, which combine to reconstruct the target 3D light field in the proposed system, as Fig. 2(a) depicts. In different frames, the physical display panel is imaged by the PBLs as virtual panels at different depths. Since all the display light comes from incoherent illumination sources, the intensities can be summed directly:

$$I = \sum_{i} I_{i}, \qquad (4)$$

where $I_1$, $I_2$, and $I_i$ are the intensities of the images on the 1st, 2nd, and $i$th virtual panels. All light rays in 3D space can be parameterized by a point on the x-y plane (the reference panel) and two directional angles, defined as

$$\theta_{x} = \arctan\!\left(\frac{r_{x}}{r_{z}}\right), \qquad \theta_{y} = \arctan\!\left(\frac{r_{y}}{r_{z}}\right), \qquad (5)$$

where $\vec{r} = (r_x, r_y, r_z)$ is the direction unit vector of the final combined ray. The final light field, generated by N layers of virtual panels, can then be described by

$$L(x, y, \theta_{x}, \theta_{y}) = \sum_{i=1}^{N} I_{i}\left(x + h_{i}\tan\theta_{x},\; y + h_{i}\tan\theta_{y}\right), \qquad (6)$$

where $I_i$ is the intensity of the image on the $i$th virtual panel and $h_i$ denotes its depth, as illustrated in Fig. 2. To computationally generate the image contents of all virtual panels, the following optimization problem must be solved:

$$\mathbf{I}^{*} = \arg\min_{\mathbf{I}} \left\| \mathbf{M}\mathbf{I} - \mathbf{T} \right\|^{2}. \qquad (7)$$

In Eq. (7), T is the target light field originating from the desired 3D scene captured at K view points in the eyebox, and M is the mapping matrix between the image contents on the virtual panels and the light field L = MI generated by the proposed system. Without loss of generality, Fig. 3 shows, as a simple example, the mapping procedure for a 3D scene with 5 × 5 view points and 2 virtual panels (each with P pixels). For a ray generated by the 8th pixel in virtual panel 1 and the 7th pixel in virtual panel 2, a viewer looking from the 8th view point would take it to represent the 9th pixel in the 3D scene, as determined by the propagation direction of the ray. Since all display contents are assumed to be discrete, the target light field can be represented by a 4D matrix (5 × 5 view points, with a 2D image on the reference panel for each view point/direction). For convenience, the direction angles of light rays are discretized by the pixel centers on the reference panel and the corresponding view points. For example, the direction angle of the green ray in Fig. 3(a) is determined by the line connecting the center of the 9th pixel on the reference panel and the 8th view point. The target light field, rendered by commercial software (such as 3ds Max), is reshaped into a vector in the order shown in Fig. 3. In this case, the (7P + 9)th row of the mapping matrix M, corresponding to the 9th pixel in the 3D scene viewed from view point 8, is zero except in the 8th and (P + 7)th columns, which mark the locations of the pixels to be added from the two virtual panels. The optimization in Eq. (7) looks like an ordinary least-squares problem; however, the elements of the vector I must stay within the range [0, 255^2.2], because the illumination intensity of a display can be neither negative nor larger than 255^2.2 (for an 8-bit display with gamma = 2.2). Hence, a well-defined constrained linear least-squares problem is obtained, which is solved with a trust-region-reflective algorithm [27]. Simulation results of this algorithm for 25 view points and 4 virtual panels are plotted in Fig. 4.
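This constrained problem class can be sketched with SciPy's `lsq_linear`, whose default `'trf'` method is a trust-region-reflective solver in the spirit of [27]. The toy setup below is only illustrative, not the paper's actual geometry: each row of the mapping matrix adds one pixel from each of two hypothetical 16-pixel virtual panels, and the ray count is made up for the sketch.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): two virtual panels of P pixels each
# -> 2P unknowns, sampled by n_rays light field rays.
P, n_rays = 16, 400
I_max = 255.0 ** 2.2  # intensity ceiling of an 8-bit display with gamma 2.2

# Additive light field model L = M @ I: each ray sums one pixel per panel.
M = np.zeros((n_rays, 2 * P))
for row in range(n_rays):
    M[row, rng.integers(0, P)] = 1.0      # contributing pixel in panel 1
    M[row, P + rng.integers(0, P)] = 1.0  # contributing pixel in panel 2

# Target light field T, in linear intensity units.
T = rng.uniform(0.0, I_max, size=n_rays)

# Constrained linear least squares: min ||M I - T||^2, s.t. 0 <= I <= I_max.
res = lsq_linear(M, T, bounds=(0.0, I_max), method='trf')

# The solver keeps every panel intensity inside the display's feasible range.
print(res.x.shape)  # (32,)
print(bool(np.all(res.x >= 0.0) and np.all(res.x <= I_max)))  # True
```

In a real system the matrix M would be built from the panel depths h_i and the discretized ray angles of Eq. (5)-(6), and would be extremely sparse, so a sparse representation (`scipy.sparse`) is the natural choice at full resolution.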
With 4 virtual panels, the simulated additive light fields provide precise, high-quality 3D images at different viewing angles. Additionally, the performance of the optimization framework is tested with different 3D scenes (Fig. 5(a)) and system setups; the results are given in Fig. 5. The simulated image quality depends heavily on the content of the input 3D scene, as Fig. 5(b) shows. Since the system's degrees of freedom are fixed by the physical display and the number of virtual panels, the compression ratio is higher for more complicated 3D scenes with low redundancy, resulting in lower-quality output images. If more degrees of freedom are provided, for example by adding virtual panels, the image quality improves significantly, as Fig. 5(c) depicts. Moreover, lowering the brightness (grey levels) of the input 3D scene is also found to improve the output image quality noticeably, which can be explained by the loosened input constraint resulting from the decreased target values.
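Fig. 5 reports image quality as PSNR versus normalized brightness. The paper does not spell out its exact normalization, so the following is a generic PSNR definition, given only as an assumed sketch of the metric:

```python
import numpy as np

def psnr(target, reconstructed, peak):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = np.mean((np.asarray(target, float) - np.asarray(reconstructed, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform reconstruction error of 1% of the peak intensity.
peak = 255.0 ** 2.2
target = np.full((8, 8), 0.5 * peak)
approx = target + 0.01 * peak
print(round(psnr(target, approx, peak), 1))  # 40.0
```

Because PSNR is measured against a fixed peak, a dimmer target reached with the same absolute residual scores the same; the brightness benefit reported in the text comes instead from the extra headroom the [0, 255^2.2] constraint leaves the solver.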

Fabrication of switchable Pancharatnam-Berry lenses
A photo-alignment method is used to fabricate the PBLs [28]. For both the active and passive driving methods, a thin photo-alignment film (PAAD-72, from Beam Company) was spin-coated onto a transparent substrate. For passive driving, the coated substrate was exposed directly to the desired interference pattern. For active driving, substrates with transparent electrodes (ITO glass) were assembled into an LC cell before the exposure procedure. Figure 6 shows the optical setup. The incident collimated linearly polarized laser beam (λ = 457 nm) was split into two arms by a non-polarizing beam splitter (BS). One beam was converted to LCP by a quarter-wave plate and served as the reference beam, while the other was converted to RCP before entering the target lens (Lt), whose focal length is identical to that of the desired PBL. After being combined by the second beam splitter, the two beams are made to have the same size on the prepared substrate (S), which is coated with the thin photo-alignment film. After exposure, for the active driving device, the cell (with indium tin oxide (ITO) electrodes) is filled with a home-made fast-response LC material (UCF-M37, γ1/K11 = 4.0 ms/μm² at 22 °C) to satisfy the half-wave requirement (dΔn = λ/2, where d is the cell gap). For passive driving, the exposed substrate is instead coated with a diluted LC monomer (e.g., RM257) and then cured by UV light, forming a thin cross-linked LC polymer film, so that no LC cell is needed. Since the proposed system is time-multiplexed, the response time of the PBLs should be as short as possible. In the passive driving mode, the response time is limited by the broadband TN polarization rotator, whose response time is typically > 2 ms [29]. In the active driving mode, the response time of the fabricated PBL (d = 1.6 μm, Δn = 0.17) is measured to be 0.54 ms, fast enough for a display panel with a 1-kHz frame rate. Because of its fast response time and compact system configuration, the active driving mode (using 2 PBLs) is selected in this paper to demonstrate the additive light field display. As shown in Fig. 7, the depth information of the image content is well rendered, and the reduction in spatial resolution is negligible compared with that of the display panel. The detailed parameters of the PBLs and the experimental 3D display system are listed in Table 1.
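The quoted cell parameters can be sanity-checked against the half-wave condition dΔn = λ/2. The relaxation-time line below uses the textbook visco-elastic estimate τ ≈ (γ1/K11)·d²/π², which is an assumption, not a figure from the paper; the measured 0.54 ms is faster, as expected for voltage-driven switching:

```python
import math

# Half-wave condition for the PBL cell: d * delta_n = lambda / 2.
delta_n = 0.17          # birefringence of the LC material (from the text)
d = 1.6                 # cell gap in um (from the text)
lam = 2 * d * delta_n   # wavelength (um) satisfying the half-wave condition
print(round(lam * 1000))  # 544, i.e. ~544 nm, close to the 550 nm design band

# Free-relaxation estimate tau ~ (gamma1/K11) * d^2 / pi^2,
# with gamma1/K11 = 4.0 ms/um^2 for UCF-M37 at 22 C (an order-of-magnitude check).
tau_ms = 4.0 * d ** 2 / math.pi ** 2
print(round(tau_ms, 2))  # 1.04
```

The ~1 ms free-relaxation estimate bounds the passive decay, while the measured 0.54 ms reflects the actively driven transition.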

Conclusion
A novel light field display system is proposed and experimentally demonstrated. Benefiting from the fast response time of the PBLs, the system can provide high-resolution 3D scenes for viewers. The physical depths provided by the PBLs can relieve the human visual system of the vergence-accommodation conflict, meeting a highly demanded requirement. With the rapidly increasing computational power of electronic devices, the proposed light field technology has promising applications in virtual reality and augmented reality.

Fig. 1 .
Fig. 1. (a) Example of the relative phase profile of a PBL with 0.5 D optical power for LCP incident light at 550 nm. (b) Example of the centrosymmetric spatial distribution of the local optical axis in a PBL, in which the azimuthal angle is approximately proportional to the square of the radius.

Fig. 2 .
Fig. 2. (a) The principle of the additive light field display, illustrated with a stack of PBLs. Each virtual image panel, formed by a specific state of the PBL stack, generates an independent additive light field; these are merged into a single light field. (b) Time-multiplexed driving scheme for 4 additive virtual panels. (c) Illustration of the active and passive driving modes of PBLs.

Fig. 3 .
Fig. 3. Schematic diagram of the additive light field mapping procedure. (a) A ray generated by the 8th pixel in virtual panel 1 and the 7th pixel in virtual panel 2 appears as the 9th pixel in the 3D scene when seen from the 8th view point. (b) Matrix description of merging the pixels of the 2 virtual panels into the final light field. All the pixels of the virtual panels are reshaped into a single vector as shown in the figure. Each row of the mapping matrix is determined by the structure of the virtual panels.

Fig. 4 .
Fig. 4. (a) Simulation results of the additive light field display with 25 view points and 4 virtual panels. The 25 images of the 3D scene (three teapots at different depths) are rendered computationally. (b) Optimized images to be displayed on the 4 virtual panels (depth increases from left to right).

Fig. 5 .
Fig. 5. (a) 3D scenes used for testing, observed from the center view point. (b) Relation between normalized brightness and PSNR for different contents. (c) Relation between normalized brightness and PSNR for different numbers of virtual panels (depths).

Fig. 6 .
Fig. 6. Optical setup of the exposure procedure in the PBL fabrication process.

Fig. 7 .
Fig. 7. Experimental results of the high-resolution additive light field 3D display system. The optimized images for the discrete virtual panels in Fig. 4(b) are used in this demonstration. The focal depth of the camera increases from (a) to (c). The red pot is closest to the viewer, while the green one is farthest.