Doubling the optical efficiency of VR systems with a directional backlight and a diffractive deflection film

We demonstrate an approach that doubles the optical efficiency of virtual reality (VR) systems using a directional backlight and a diffractive deflection film (DDF). The directional backlight consists of a commercial collimated light-emitting diode (LED) array and a two-layer privacy film, while the DDF is a three-domain Pancharatnam-Berry (PB) phase lens fabricated by a zone-exposure and spin-coating method. The focal length of each domain is designed according to the imaging optics of the VR system. Our approach works well in both Fresnel and "pancake" VR systems. We also built corresponding models in LightTools. In experiment, we achieved a 2.25x optical efficiency enhancement for both systems, in good agreement with the simulation results (2.48x for the Fresnel system and 2.44x for the "pancake" system). Potential applications in high-efficiency VR displays are foreseeable.

only occupies a small portion of the display pixel's angular spectrum. Therefore, only a fraction of the light emitted by the display panel is captured by the observer, resulting in a low light efficiency. To enhance the light efficiency, a directional backlight (BL) display is preferred [12][13][14]. Thanks to the directional BL, most of the light emitted by the central area will enter the pupil. However, the light efficiency of peripheral pixels cannot be improved, due to the mismatch between their primary emission direction and the corresponding chief ray direction, as Fig. 1(b) shows. To overcome this problem, the primary emission direction of the pixels on the display should be modified, so that light from the peripheral pixels can also be steered into the pupil (Fig. 1(c)). Moreover, when a directional BL is used, curving the display panel [15,16] can also improve the light efficiency, as Fig. 1(d) depicts. Nonetheless, this solution demands a two-dimensional (2D) curved display, which is challenging to fabricate. Earlier, Zhan et al. proposed a concept of using liquid crystal optics to enhance the optical efficiency of near-eye displays [12]. In this paper, we demonstrate a VR system using a commercial BL and a specially designed zoned diffractive deflection film (DDF) to prove its feasibility. We build two types of VR systems, with a Fresnel lens and a "pancake" lens, respectively. In these two systems, we use three kinds of angular distributions, corresponding to the situations described in Figs. 1(a)-1(c). The first panel is a conventional twisted nematic (TN) liquid crystal display [17]. The second one uses the same TN panel but with a directional BL, which has a narrower angular bandwidth. In contrast to the second approach, the third one adds a DDF on top of the TN panel, so that the primary emission directions are modulated in different areas. With the help of the directional BL and the DDF, the light efficiency of both VR systems can be doubled.

Diffractive deflection film
The DDF we developed is essentially a multi-domain Pancharatnam-Berry (PB) phase lens [18,19], whose focal length varies between different radial zones. Figure 2 shows the pattern exposure setup for fabricating such a multi-domain PB phase lens. In experiment, we designed a three-domain PB lens; the size and area of each domain match those of the photomask shown in Fig. 2. The three-domain mask was fabricated by a femtosecond laser (Pharos, Light Conversion). The laser beam (wavelength 1030 nm, pulse duration 170 fs, repetition rate 1 kHz, and average power 600 mW) was focused onto a plain photomask (CAD/Art Services, Inc.) by a lens with a 1000-mm focal length. The photomask was attached to a 3D translation stage for precise scanning. Each of the three rings on the photomask was cut at a scanning speed of 10 mm/s in multiple runs until the cut went completely through. The three domains (1, 2, and 3) are concentric, with outer diameters of 22 mm, 32 mm, and 40 mm, respectively. After the alignment layer (0.2% brilliant yellow dissolved in dimethylformamide) was spin-coated onto the top surface of a clean 2-inch by 2-inch substrate, the photomask was adhered to the back surface of the sample substrate. Then the substrate with the photomask was placed at three different positions (1, 2, and 3, as marked in Fig. 2) for the holographic pattern exposure [20][21][22]. At each position, the corresponding domain of the photomask was opened, so that only one domain was exposed at a time. Due to the position difference during exposure, the focal length of each domain is different. In this way, a three-domain PB phase lens pattern was recorded on the alignment layer. The focal length of each domain depends on the imaging optics, and its value will be specified later. After the pattern exposure, we spin-coated one layer of a reactive mesogen mixture (RMM) onto the substrate. Then a UV photo-polymerization process was applied to cure and stabilize the polymer film.
The components of the RMM include 95% reactive mesogen RM257 (from LC Matter), 4.9% photo-initiator Irgacure 651 (from BASF), and 0.1% surfactant Zonyl 8857A (from DuPont). The RMM was dissolved in toluene at a solute-to-solvent ratio of about 1:2.5. The polymer film thickness is optimized at a green wavelength. In practical applications, a multi-twist structure can be adopted to achieve broadband performance [23], and the chromatic aberration can be corrected by laminating a diffractive optical film to the refractive optical element [20]. Figure 3(a) shows a photo of the fabricated three-domain PB phase lens. The imaging background is the floor tiles of a corridor, and we can see that the focal length of each domain is different, because the tile edges show a sharp jump at the boundaries between domains. The focal lengths of domains 1, 2, and 3 are 12 cm, 17 cm, and 35 cm, respectively. The choice of focal length depends on the chief ray direction of each pixel. In our lab, we do not have the setup to fabricate a pixel-level DDF, which requires direct-writing equipment [24]. We calculate the corresponding "focal length" (f = d/tan(θ), where f is the "focal length", d is the distance between the pixel and the display center, and θ is the deflection angle) for the pixels along the radial direction, and this "focal length" does not change abruptly. The pixels within a certain area have similar "focal lengths". Therefore, we can use a PB phase lens to deflect the primary emission direction of the pixels in that area. Based on the "focal length" variation along the display radius, we divide the display into three domains, so that the primary emission direction of most pixels on the display can be deflected to the desired direction. Figure 3(b) is a photo of the lens pattern in the central area of domain 1, captured by a polarization microscope (OLYMPUS BX51). Figure 3(c) shows the microscope image at the boundary area.
The alignment quality is clearly degraded at the boundary area. This is because the photomask domains do not match each other perfectly, and the boundary is exposed twice. The width of the boundary is about 150 µm. However, the impact of such a narrow boundary on our experimental results is negligible.
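As an illustration of how the display can be partitioned into a few zones, the equivalent "focal length" f = d/tan(θ) can be tabulated along the radius. The (d, θ) pairs below are hypothetical sample values, not the measured chief-ray data of our lens design:

```python
import math

# Illustrative tabulation of the equivalent "focal length" f = d / tan(theta)
# along the display radius. The (d, theta) pairs are hypothetical sample
# values, not the measured chief-ray data of our lens design.
samples = [
    (2.0, 1.0),    # d in mm from display center, theta in degrees
    (5.0, 2.4),
    (10.0, 3.4),
    (15.0, 3.2),
    (19.0, 3.1),
]

for d_mm, theta_deg in samples:
    f_cm = d_mm / math.tan(math.radians(theta_deg)) / 10.0
    print(f"d = {d_mm:4.1f} mm, theta = {theta_deg:3.1f} deg -> f = {f_cm:5.1f} cm")
```

Because f varies smoothly with d, neighboring pixels can share a single lens zone, and a handful of zones (three, in our case) serves most pixels on the display.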

Fresnel lens-based VR system
In this section, we describe the three VR systems we built using a Fresnel lens. These three systems use the same TN panel as the display but have different emission angular spectra, obtained by using a conventional or directional BL, with or without a DDF (Fig. 4(a)). For consistency, a quarter-wave plate (QWP) film was laminated to the TN panel, which converts the linearly polarized light to left-handed circularly polarized (LCP) light. Because such a three-domain PB lens is polarization dependent [20,25,26], only the LCP light sees the positive lens effect in our design. Figure 4(b) shows the Zemax profile of the Fresnel lens we used.
The first and second Fresnel VR systems do not use a DDF to modulate the primary emission direction of each pixel; all the pixels on the panel have primary emission directions perpendicular to the panel's surface. These two systems differ in their BLs, which have different angular distributions and luminous fluxes. The first one uses an edge-lit BL (Adafruit Industries) with two brightness enhancement films [27]. The second system uses a directional BL consisting of a collimated LED backlight (Edmund Optics #14270) and a two-layer privacy film (3M PF170C4B). The two-layer privacy film was added to narrow down the angular spectrum of the collimated LED BL. The angular spectra of the two BLs were measured by a goniophotometer (TechnoTeam Vision), and the results are shown in Fig. 5(a). According to the normalized angular spectra, the conventional BL has a full width at half maximum (FWHM) of about ±24°, while the directional BL has about ±14°. After integrating the angular spectra of the two BLs in spherical coordinates, the total luminous flux of the directional BL is about 30.7% higher than that of the conventional one. To make a fair comparison, we adjusted the gray level of the TN panel so that the total luminous flux of the two BLs is the same. The Fresnel lens was inserted between the display and the observer, as Fig. 4(a) shows. The distance between the display and the Fresnel lens is 35 mm, and the eye relief is 14 mm. The images at the eye pupil of these two VR systems are shown in Fig. 5(b) (conventional BL) and 5(c) (directional BL). When we took these two photos, the camera settings remained the same. From the figures, we can see a significant light efficiency enhancement when the directional BL is applied to the VR system.
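The spherical-coordinate flux integration can be sketched as follows. Unit-peak Gaussian profiles with the measured FWHMs stand in for the actual goniophotometer curves, so the 30.7% figure is taken as given and the inferred on-axis intensity ratio is only illustrative:

```python
import numpy as np

# Hemispherical flux of a rotationally symmetric angular profile:
# flux = integral of I(theta) * 2*pi*sin(theta) dtheta over [0, pi/2].
# Gaussian stand-ins with the measured FWHMs (2x24 deg and 2x14 deg)
# replace the real goniophotometer data here.
theta = np.radians(np.linspace(0.0, 90.0, 9001))
dtheta = theta[1] - theta[0]

def hemisphere_flux(fwhm_deg):
    sigma = np.radians(fwhm_deg) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    intensity = np.exp(-theta**2 / (2.0 * sigma**2))  # unit peak
    return float(np.sum(intensity * 2.0 * np.pi * np.sin(theta)) * dtheta)

flux_conv = hemisphere_flux(48.0)  # conventional BL, FWHM = 2 x 24 deg
flux_dir = hemisphere_flux(28.0)   # directional BL,  FWHM = 2 x 14 deg

# The directional BL delivers 30.7% more total flux despite its narrower
# profile, so its on-axis intensity must be higher by roughly this factor:
peak_ratio = 1.307 * flux_conv / flux_dir
print(f"inferred on-axis intensity ratio: {peak_ratio:.1f}x")
```

With these stand-in profiles the inferred on-axis ratio comes out near 4x, which is qualitatively consistent with a collimated LED source being much brighter on axis than a diffuse edge-lit BL.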
To quantify the light efficiency enhancement introduced by the directional BL, we also established a VR simulation model in LightTools, as Fig. 6 depicts. In this model, we use a linear light source consisting of 21 point sources with a 1-mm gap in between. The corresponding half field of view (FOV) is 35°, which is less than that of our Fresnel lens (half FOV ≈ 50°), because the size of our DDF substrate and directional BL is not large enough to cover the entire FOV of the Fresnel lens. The reason we use point sources to represent a line source is that the primary emission direction changes slowly along the radial direction, so the surrounding pixels (within 1 mm of the corresponding point source) have a primary emission direction similar to that of the point source. In our system, when the pixels have the same angular distribution, it is the primary emission direction that influences the light efficiency. Therefore, we set the gap between the point sources to 1 mm, which is dense enough to represent the experimental condition. The eye pupil (the receiver in simulation, the entrance pupil of the camera in experiment) size is 4 mm. During simulation, the angular distributions of the conventional BL and the directional BL in Fig. 5(a) are imported into our LightTools program. All other parameters, such as light source power, lens, and receiver settings, remain the same. After calculation, the power on the receiver when using the directional BL is 1.91x that when using the conventional BL. In experiment, a CMOS (complementary metal oxide semiconductor) camera (TechnoTeam Vision, LMK6 color) was used to capture the color image and luminance at the eye pupil. The color imaging results using the conventional BL and the directional BL are shown in Fig. 7(a) and 7(c), respectively, and the corresponding luminance distributions are shown in Fig. 7(b) and 7(d).
According to the measurements, the average luminance of the imaging pixels is 83.7 nits for the conventional BL (Fig. 7(b)) and 183.8 nits for the directional BL (Fig. 7(d)). On the other hand, the luminous flux of the directional BL is about 30.7% higher than that of the conventional BL. After this normalization, the light efficiency of the directional BL based VR system is 1.68x that of the conventional one. The simulation results agree with experiment reasonably well.
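The normalization above amounts to dividing the measured luminance ratio by the BLs' total-flux ratio:

```python
# Normalizing the measured luminance gain by the BLs' total-flux ratio
lum_conventional = 83.7   # nits, conventional BL (Fig. 7(b))
lum_directional = 183.8   # nits, directional BL (Fig. 7(d))
flux_ratio = 1.307        # directional BL emits 30.7% more total flux

gain = (lum_directional / lum_conventional) / flux_ratio
print(f"normalized light-efficiency gain: {gain:.2f}x")  # -> 1.68x
```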
Next, we added a DDF on top of the display panel with the directional BL, as Fig. 4(a) depicts. This DDF has the same size and diameter as the one shown in Fig. 3(a), but the focal lengths of its domains are 7 cm, 8 cm, and 10 cm (from domain 1 to domain 3), designed according to the chief ray directions shown in Fig. 4(b). For a more convenient comparison, we used a blade wrapped in a cleaning tissue rinsed with acetone to remove half of the DDF from the substrate. As a result, only half of the display panel still has the DDF to correct the primary emission angles of the pixels, while the other half does not. Figure 8(a) shows the imaging result, and Fig. 8(b) shows the luminance distribution. The left part of the imaging pixels (w/o DDF) has an average luminance of 165.5 nits, while the right part (w/ DDF) has 221.8 nits. The comparison gives a light efficiency enhancement of 34.0% by adding the DDF. In simulation, we adjust the primary emission direction of each point source based on Fig. 4(b), from which we can read the primary emission (chief ray) directions of the corresponding pixels in Zemax. The power detected by the receiver is increased by 30.0%, which is consistent with the experimental data. The reason our experimental result is slightly higher than the simulated one is that the peak luminous intensity of our directional BL (blue line in Fig. 5(a)) is not at the primary emission direction (normal direction); the surrounding emission directions have higher luminous intensities than the primary emission direction. Since our three-domain PB phase lens is not an ideal DDF, it introduces some mismatch between the primary emission direction and the chief ray direction (Fig. 4(b)) for some pixels, so the surrounding emission directions with higher luminous intensities can match the chief ray direction in experiment.
In simulation, however, we can ideally control the deflection angle of each light source, so the primary emission direction exactly matches the chief ray direction. Thus, the light efficiency enhancement obtained from experiment is somewhat higher than that from simulation.
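The two measured steps multiply to the overall enhancement quoted in the abstract:

```python
# Combining the two measured enhancement steps for the Fresnel system:
# step 1: directional BL (1.68x after flux normalization, from above)
# step 2: half-DDF comparison on the same panel
directional_gain = 1.68
ddf_gain = 221.8 / 165.5  # w/ DDF vs. w/o DDF average luminance

print(f"DDF step: +{(ddf_gain - 1) * 100:.1f}%")        # -> +34.0%
print(f"combined: {directional_gain * ddf_gain:.2f}x")  # -> 2.25x
```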

"Pancake" VR system
The "pancake" VR system has a structure similar to that of the Fresnel system shown in Fig. 4(a), but adopts a "pancake" lens as the imaging optics. The Zemax profile is plotted in Fig. 9(a).
The front surface of this "pancake" lens is a reflective circular polarizer, which reflects right-handed circularly polarized (RCP) light while transmitting LCP light. The back surface of the "pancake" lens is a half mirror. The distance between the "pancake" lens and the display panel is 16 mm, and the eye relief is 13 mm. A conventional BL and a directional BL (the same as those used in the Fresnel system) were employed in turn in the display. Their imaging results are shown in Fig. 9(b) (conventional BL) and Fig. 9(c) (directional BL). The light efficiency enhancement from the directional backlight is quite significant. Similar to what we did for the Fresnel system, a "pancake" VR model was built in LightTools to quantify the light efficiency improvement introduced by the directional BL. In this model, we use a 19-mm-long linear light source comprising 20 point sources with a 1-mm gap in between, corresponding to a 50° half FOV. The "pancake" VR system can achieve a 50° half FOV because the aperture size of the "pancake" lens is small, so the size of the DDF substrate and the directional BL is no longer a limitation. The eye pupil size is still 4 mm. During simulation, the angular spectra of the conventional BL and the directional BL are the same as those used in the Fresnel system. After calculation, the power on the receiver when using the directional BL is 2.39x that when using the conventional BL. In experiment, the imaging results of the conventional BL and the directional BL are shown in Fig. 10(a) and 10(c), respectively, and the corresponding luminance distributions are shown in Fig. 10(b) and 10(d). The average luminance of the imaging pixels is 23.4 nits for the conventional BL (Fig. 10(b)) and 67.4 nits for the directional BL (Fig. 10(d)). Considering that the directional BL has a 30.7% higher luminous flux, the light efficiency of the directional BL "pancake" VR system is 2.20x that of the conventional one.
Therefore, the simulation results are in good agreement with the measured data.
To study the light efficiency enhancement from the DDF in a "pancake" VR system, we added a DDF on top of the display panel. This DDF is the one shown in Fig. 3(a); the focal lengths of domains 1 to 3 are 12 cm, 17 cm, and 35 cm, respectively. Similar to the Fresnel system, we wiped off half of the DDF on the substrate for convenience of comparison. However, a major difference between the Fresnel lens and the "pancake" lens is that the latter is strongly polarization selective, while the former is not. Thus, we choose RCP light as the input to the "pancake" lens. On the other hand, our three-domain PB phase lens is also a polarization dependent device; in our design, it is converging only for LCP input. Therefore, we need to select the proper circularly polarized input light based on whether it will pass through the DDF. The system structure is shown in Fig. 11. After passing through the TN panel, the 45° linearly polarized light is converted to LCP or RCP light by arranging the fast axis of the QWP film in either the horizontal or the vertical direction. Half of the input light, say RCP, passes through the glass substrate and its polarization state remains unchanged. The other half (LCP) passes through the DDF, and its polarization state changes from LCP to RCP. Therefore, all the light input to the "pancake" lens has the same polarization (RCP).
Fig. 11. Schematic of the "pancake" VR system with a directional BL and a half DDF.
Figure 12(a) shows the imaging result, and Fig. 12(b) is the measured luminance distribution. The average luminance of the left part (w/ DDF) of the imaging pixels is 59.45 nits, and that of the right part (w/o DDF) is 58.08 nits, which indicates the light efficiency is improved by 2.36% by adding the DDF. During simulation, we adjust the primary emission direction of each point source according to the chief ray directions in Fig. 9(a).
The power detected by the receiver is increased by 2.12%, which is consistent with the experimental data. The main reason we do not see a significant improvement after adding the DDF is because the "pancake" lens is designed to be telecentric [28], so that the chief ray direction in Fig. 9(a) is already very close to the primary emission direction of the display panel.
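The polarization bookkeeping of Fig. 11 can be verified with a small Jones-calculus sketch. The RCP/LCP labels depend on sign convention, so the code only checks the handedness logic: the two QWP orientations produce opposite circular states, and the PB element (modeled locally as a half-wave plate) flips handedness, so both halves reach the "pancake" lens with the same circular polarization:

```python
import numpy as np

# Jones-calculus sketch of the polarization chain in Fig. 11 (one common
# sign convention; RCP/LCP labels are convention dependent).

def qwp(fast_axis_deg):
    # Quarter-wave plate (up to a global phase) with a rotated fast axis
    a = np.radians(fast_axis_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R @ np.array([[1, 0], [0, 1j]]) @ R.T

def handedness(jones):
    # Sign of the Stokes S3 component: opposite signs = opposite handedness
    return np.sign(np.round(2 * np.imag(np.conj(jones[0]) * jones[1]), 6))

lin45 = np.array([1, 1]) / np.sqrt(2)  # 45-deg linear light from the TN panel
circ_a = qwp(0) @ lin45                # QWP fast axis horizontal
circ_b = qwp(90) @ lin45               # QWP fast axis vertical

# A PB phase lens acts locally as a half-wave plate; the local axis angle
# only adds the spatially varying (lens) phase, and handedness always flips.
pb_local = np.array([[1, 0], [0, -1]])

assert handedness(circ_a) == -handedness(circ_b)            # opposite circular inputs
assert handedness(pb_local @ circ_b) == handedness(circ_a)  # DDF half matches bare half
print("both display halves output the same circular handedness")
```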

Discussion
We summarize the experimental and simulation results in Table 1, where the light efficiency is normalized to that of the conventional BL for both the Fresnel and "pancake" VR systems. There are two steps in our light efficiency enhancement process: the first is to apply a directional BL, and the second is to add a DDF. According to the normalized results in Table 1, after these two steps the Fresnel system and the "pancake" system exhibit a similar light efficiency enhancement; the difference is how much enhancement is achieved in each step. For the Fresnel system, both steps contribute an obvious enhancement, while most of the enhancement for the "pancake" system occurs in the first step. This is because the "pancake" lens is more telecentric than the Fresnel lens in our experiment. In other words, the telecentric design strengthens the effect of the directional BL but impairs the enhancement brought by the DDF. However, a telecentric design would cause more aberrations. With the help of a DDF, the "pancake" lens does not need to be telecentric anymore, so the VR display can achieve a better image quality. Furthermore, we notice that after applying the directional BL and the DDF, the light efficiency enhancements of the Fresnel and "pancake" systems are similar. The enhancement is not strongly related to the employed imaging optics, but depends largely on the BL's angular distribution and the solid angle corresponding to the eye pupil (θ1 in Fig. 4(b) and θ2 in Fig. 9(a); for convenience, we use the half apex angle to represent the corresponding solid angle). By integrating the angular distributions of the directional BL and the conventional BL within the solid angle θ1, we find that the in-pupil luminous flux of the directional BL is 3.18x that of the conventional BL. Since the directional BL has a 30.7% higher total luminous flux, the normalized result is 2.43x (3.18 ÷ 1.307 = 2.43).
This value is consistent with the simulated one (2.48x) for the Fresnel VR system using a directional BL and a DDF. Next, we repeat the same process for the "pancake" system. Integrating the angular distributions of the two BLs within the solid angle θ2 gives a normalized flux ratio of 2.39x for the directional BL (the 30.7% higher total luminous flux has already been divided out). This result is also consistent with the simulated one (2.44x) shown in Table 1.
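As a quick arithmetic cross-check of the normalized values quoted above:

```python
# Arithmetic cross-check of the normalized enhancements in the discussion
flux_ratio = 1.307        # directional BL's extra total flux
fresnel_in_pupil = 3.18   # flux ratio within solid angle theta1 (not yet normalized)
print(f"Fresnel: {fresnel_in_pupil / flux_ratio:.2f}x (simulated: 2.48x)")  # -> 2.43x

pancake_normalized = 2.39  # theta2 integration, extra flux already divided out
print(f'"pancake": {pancake_normalized:.2f}x (simulated: 2.44x)')
```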
It is worth mentioning that the light efficiency enhancement brought by the DDF benefits the peripheral pixels. The intensity distribution at the eye pupil was simulated in LightTools, and the results are shown in Fig. 13(a). From Fig. 13(a), we can see that the maximum intensity is not located at the pupil center, because the maximum intensity on the receiver shifts toward the positive Y direction as the light source moves away from the optical axis of the lens (Fig. 6). This also explains why the vignetting is much more serious in the Fresnel system (Fig. 7(c)) after applying a directional BL. As a matter of fact, when observed by the human eye, the vignetting is not as serious as the experimental imaging results in Figs. 7, 8, 10 and 12 suggest; the CMOS camera has an ultra-wide-FOV objective lens (∼±60° FOV), which introduces significant aberrations for the large-FOV pixels. With the help of the DDF, the primary emission direction can be corrected, so that the maximum intensity returns to the pupil center (Fig. 13(b)), thereby alleviating the vignetting problem. For the "pancake" system, the vignetting is not as serious as in the Fresnel system because the "pancake" lens has a telecentric design, but we can still see the maximum intensity shifting from the pupil edge to the center in the simulation results of Fig. 13(c) and 13(d). From the experimental results in Fig. 12(b), we also notice that the light efficiency enhancement is contributed by the peripheral pixels.
Fig. 13. Simulated intensity distribution at the eye pupil for the Fresnel system (a) before and (b) after adding a DDF, and the "pancake" system (c) before and (d) after adding a DDF. All these studies are with a directional BL.
Currently, our directional BL has a FWHM of about 14°, but the half apex angle of the solid angle corresponding to the eye pupil in the two systems (θ1 in Fig. 4(b) and θ2 in Fig. 9(a)) is only about 3° to 5°. Therefore, a portion of the BL light is still wasted. If the FWHM of the directional BL could be reduced from 14° to 6°-10°, the light efficiency enhancement would be more significant. However, a narrow angular distribution results in a small eyebox, so the tradeoff between light efficiency and eyebox size should be taken into consideration.
The underlying principle for enhancing optical efficiency is to reduce the etendue waste in the system. A perfect system makes full use of the etendue from the image generation unit to the eyebox. Figure 14 shows an ideal VR system with "zero etendue loss". In this system, the primary emission direction β and the angular width 2θ of the display are functions of r, the distance from the corresponding pixel to the display center. According to the geometry shown in Fig. 14, we obtain the following relations:

tan β = (d − f)r/f², (1)

tan θ = D/(2f), (2)

where d is the eye relief distance, f is the focal length of the optical lens, and D is the diameter of the eye pupil. From Eq. (1), we find β = 0 when d = f, independent of position, i.e., the system is telecentric. Based on Eq. (1) and Eq. (2), we can optimize the DDF and the BL angular distribution to enhance the efficiency of a VR system. In our approach, we apply a PB-based DDF to tailor the emission pattern and transmit more etendue into the eyebox. Based on this principle, other types of DDFs, metasurfaces, and curved displays can also be used to enhance the VR system's optical efficiency. In addition, the display employed in the system is not limited to an LED-backlit LCD; other types of displays, such as mini-LED [29,30] and micro-LED [31] displays, are also suitable if they have a reasonably narrow angular distribution.
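Taking the relations as tan β = (d − f)r/f² and tan θ = D/(2f) (our reading of Eqs. (1) and (2) from the stated geometry), a short numerical check confirms the telecentric condition β = 0 at d = f and a pupil half-angle near 3° for Fresnel-like parameters. Here f ≈ 35 mm (assumed equal to the display-lens distance of our Fresnel system), D = 4 mm, and the r values are illustrative:

```python
import math

# Numerical check of tan(beta) = (d - f) * r / f**2 and tan(theta) = D / (2 * f)
f = 35.0  # mm, lens focal length (assumed ~ display-lens distance)
D = 4.0   # mm, eye pupil diameter

for d in (14.0, 35.0):            # actual eye relief vs. telecentric case d = f
    for r in (0.0, 10.0, 19.0):   # mm from display center (illustrative)
        beta = math.degrees(math.atan((d - f) * r / f**2))
        print(f"d = {d:4.0f} mm, r = {r:4.0f} mm -> beta = {beta:6.2f} deg")

# Half angular width of the emission cone that just fills the pupil
theta = math.degrees(math.atan(D / (2.0 * f)))
print(f"theta = {theta:.2f} deg")  # ~3.3 deg, within the quoted 3-5 deg range
```

For d = f every β is zero regardless of r, matching the telecentric conclusion in the text; for d < f the required emission tilt grows with r, which is exactly what the zoned DDF approximates.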

Conclusion
We have achieved a significant light efficiency enhancement in both Fresnel and "pancake" VR systems with the help of a directional BL and a DDF. The employed directional BL has an angular bandwidth of ±14° (vs. ±24° for the conventional BL). A three-domain PB phase lens serves as the DDF, with focal lengths designed according to the imaging optics of each system. In experiment, we obtained a 2.25x light efficiency enhancement for both the Fresnel and "pancake" systems, which agrees well with our simulation results (2.48x for the Fresnel system and 2.44x for the "pancake" system). All the elements we utilized are cost-effective and readily available. Widespread application of our approach to enhance the light efficiency of VR displays, especially "pancake" systems, is foreseeable.