Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view

Abstract. Recently, it has become necessary to evaluate the performance of display devices in terms of human factors. To meet this requirement, several studies have been conducted to measure the eyestrain of users watching display devices. However, these studies were limited in that they did not consider precise human visual information. Therefore, a new eyestrain measurement method is proposed that uses a liquid crystal display (LCD) and takes into account a user's gaze direction and visual field of view. Our study differs from previous work in the following four ways. First, a user's gaze position is estimated using an eyeglass-type eye-image capturing device. Second, we propose a new eye foveation model based on a wavelet transform that considers the gaze position and the gaze detection error of a user. Third, three video adjustment factors, namely variance of hue (VH), edge, and motion information, are extracted from the displayed images to which the eye foveation models are applied. Fourth, the relationship between eyestrain and the three video adjustment factors is investigated. Experimental results show that a decrease in the VH value of a display induces a decrease in eyestrain. In addition, increased edge and motion components induce a reduction in eyestrain.


Introduction
Currently, various display devices, such as the plasma display panel (PDP), liquid crystal display (LCD), light-emitting diode, active-matrix organic light-emitting diode, and stereoscopic TV, are being manufactured. The use of these display devices is becoming increasingly widespread, with the devices being rapidly adopted for laptop computers, mobile phones, high-definition TV (HDTV), and so on. Many manufacturers and consumers are interested in the attributes of these display devices, including their field of view, spatial resolution, response speed, and degree of motion blur. In addition to these kinds of quantitative characteristics, consumers expect good display capability in terms of human factors.
Accordingly, several studies have evaluated the eyestrain experienced by users of display devices.2,3 Other studies investigated the relationships between the eyestrain caused by an LCD device and video factors such as brightness, contrast, saturation, hue, edge difference, and scene changes.4,5 In addition, the eyestrain caused by a stereoscopic display was examined using subjective measurements, optometric instrument-based measurements, optometric clinical measurements, and brain activity measurements.6,7 In previous research, the eyestrain caused by two- and three-dimensional (2-D and 3-D) displays was compared using the average blinking rate (BR).8 However, most previous studies did not consider human visual information, such as the gaze position and the visual field of view, for estimating eyestrain. For instance, Lee and Park measured eyestrain on the basis of the change in pupil size in relation to changes in four adjustment factors: brightness, contrast, saturation, and hue.5 However, each factor was calculated from the whole image in the display without considering the influence of the human gaze position. Other factors, such as edge difference and scene change, were also calculated from the whole image in the display.4 In other words, these studies were conducted under the assumption that every region in a given image on the display was perceived equally by the subject. To overcome this problem, a new eye foveation model is proposed here that considers a user's gaze position and the error of gaze detection. Three video adjustment factors, namely variance of hue (VH), edge, and motion information, are extracted from the successive images in the displays to which these eye foveation models are applied.
This article is organized as follows. In Sec. 2, the proposed device for gaze tracking and eye response measurement and the methods of analysis are presented. In Sec. 3, the methods for extracting video features, considering the gaze position and the foveation-based visual field of view, are explained. The experimental setup and results are presented in Sec. 4. Finally, Sec. 5 presents the conclusion of this article and the plans for future work.
2 Proposed Device and Analysis Methods

2.1 Device for Measuring Gaze Position and Eye Response
The proposed device consists of an eye-image capturing camera and four near-infrared (NIR) illuminators.10,11 The eye-capturing camera is attached to an eyeglass frame near the lower part of one eye, as shown in Fig. 1. The camera is a small web camera with a universal serial bus port that captures images at a speed of 15 frames/s. The spatial resolution of the captured image is 640 × 480 pixels. A zoom lens is used to capture magnified images of the eye. To screen out visible light, an NIR passing filter is attached to the camera lens.8,9-11 Figure 2 shows an example of the experimental setup. The four NIR illuminators are attached to the four corners of the display.10,11 They do not affect the user's vision because NIR light of 850 nm does not dazzle the user's eye. The four NIR illuminators produce four corneal specular reflections, as shown in Fig. 3, which represent the rectangular area of the display since these illuminators are attached to its four corners.8,9

2.2 Gaze Tracking Method
As a user-dependent calibration, each user first gazes at a central position on the display; this is required to compensate for the angle kappa, the angular offset between the visual and pupillary axes.9,11 In each captured eye image, the center of the pupil is then detected.10,11 Figure 3 shows the four specular reflections of the four NIR illuminators attached to the corners of the LCD screen. These reflections are located by binarization, component labeling, and size filtering.9 The four specular reflections represent the rectangular area of the display. Therefore, on the basis of the detected pupil center and the four specular reflections, the user's gaze position on the display is calculated according to the geometric transform between the rectangle formed by the four reflections and the rectangle of the display.9,11
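As a rough illustration of this geometric transform, the following sketch estimates the projective mapping from the quadrilateral formed by the four corneal reflections to the display rectangle and applies it to the detected pupil center. The function names are hypothetical and the paper's user-dependent calibration (kappa compensation) is omitted; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def homography(src, dst):
    # Solve for the 3x3 projective transform mapping the 4 src points to the
    # 4 dst points via the direct linear transform (DLT) and SVD.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector = homography (up to scale)

def gaze_on_display(pupil_center, reflections, display_size):
    # reflections: the 4 specular reflection points, ordered to match the
    # display corners (top-left, top-right, bottom-right, bottom-left).
    w, h = display_size
    corners = [(0, 0), (w, 0), (w, h), (0, h)]
    H = homography(reflections, corners)
    p = H @ np.array([pupil_center[0], pupil_center[1], 1.0])
    return p[0] / p[2], p[1] / p[2]  # homogeneous -> pixel coordinates
```

For an axis-aligned reflection rectangle, a pupil center at its middle maps to the middle of the display, as expected.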

2.3 Eye Response Measurement
In this research, the average eye BR is used to measure eyestrain. Previous studies12,13 observed that the BR increases as a function of time on task. On this basis, other studies measured eyestrain under the assumption that more frequent blinking corresponds to greater eyestrain.2,4 The average BR is calculated in a time window of 60 s; the time window is moved with an overlap of 50 s.
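The windowing described here (60 s windows moved with a 50 s overlap, i.e., a 10 s step) can be sketched as follows; `average_blink_rate` is a hypothetical helper that takes blink timestamps in seconds, not part of the paper.

```python
def average_blink_rate(blink_times, duration, window=60.0, overlap=50.0):
    """Blinks per minute in sliding windows of `window` seconds,
    shifted by (window - overlap) seconds each step."""
    step = window - overlap
    rates, start = [], 0.0
    while start + window <= duration:
        # Count blinks whose timestamp falls inside the current window.
        n = sum(start <= t < start + window for t in blink_times)
        rates.append(n * 60.0 / window)  # normalize to blinks/minute
        start += step
    return rates
```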

3 Extraction of Video Features by Considering Gaze Position and Visual Field of View

3.1 Contrast Sensitivity Model Based on Foveation
To measure visual sensitivity according to the gaze position and angular offset, it is necessary to determine a function of retinal eccentricity. For this, previous research on visual sensitivity is referenced, which showed that visual sensitivity decreases as the distance from the gaze position increases.15-17 In this research, eyestrain is measured by calculating a user's gaze position and by determining the user's visual information on the basis of foveation. Humans perceive a dramatic decrease in visual sensitivity in areas away from the point of gaze: the point of gaze is perceived at high resolution, but the perceived resolution decreases as the distance from this point increases. Accordingly, a foveation (visual field of view) model based on the gaze information is defined. The contrast threshold (CT) is modeled as14,16

CT(f, e) = CT_0 \exp[\alpha f (e + e_2) / e_2],   (1)

where f is the spatial frequency (cycles/degree) and e is the retinal eccentricity (degrees). The optimal fitting parameters are determined on the basis of previous research (\alpha = 0.106, e_2 = 2.3, and CT_0 = 1/64).14,16 The contrast sensitivity (CS) is defined as the reciprocal of the CT:14,16

CS(f, e) = 1 / CT(f, e).   (2)

To apply these models to an image, the eccentricity needs to be calculated for any point x = (x_1, x_2)^T (pixels) in the image. Because the user's gaze position x_f is the foveation point, the distance from x to x_f is given by14,16

d(x) = \|x - x_f\|_2.   (3)

Further, the eccentricity is obtained as14,16

e(v, x) = \tan^{-1}[d(x) / (N v)],   (4)

where N is the width of the image and v is the viewing distance (measured in image widths) from the eye to the image plane.14,16 The cutoff frequency f_c, beyond which high-frequency components are not perceivable, is obtained by setting CT to 1 (the maximum possible contrast) in Eq. (1):14,16

f_c(e) = e_2 \ln(1 / CT_0) / [\alpha (e + e_2)].   (5)

According to the Nyquist-Shannon sampling theorem, the highest frequency that meets the display Nyquist frequency is14,16

f_d(v) = \pi N v / 360.   (6)

Combining Eqs. (5) and (6), the final cutoff frequency f_m is obtained as14,16

f_m(v, x) = \min\{f_c[e(v, x)], f_d(v)\}.   (7)

Finally, the foveation-based error sensitivity is defined as14,16

S_f(v, x) = CS[f, e(v, x)] / CS(f, 0) if f \le f_m(v, x), and 0 otherwise,   (8)

and is illustrated in Fig. 4, where a brighter region represents higher contrast sensitivity.
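Under the stated fitting parameters (α = 0.106, e_2 = 2.3, CT_0 = 1/64), the eccentricity and cutoff-frequency computations of Eqs. (4) to (7) can be sketched as follows; the function names are hypothetical, and this is a sketch of the cited model,14,16 not the authors' code.

```python
import numpy as np

ALPHA, E2, CT0 = 0.106, 2.3, 1.0 / 64.0  # fitting parameters from Eq. (1)

def eccentricity(v, d, N):
    # e(v, x) in degrees, Eq. (4): d is the pixel distance from the gaze
    # point, N the image width, v the viewing distance in image widths.
    return np.degrees(np.arctan(d / (N * v)))

def cutoff_frequency(e):
    # f_c(e), Eq. (5): spatial frequency (cycles/deg) where CT reaches 1.
    return E2 * np.log(1.0 / CT0) / (ALPHA * (e + E2))

def final_cutoff(v, d, N):
    # f_m, Eq. (7): minimum of f_c and the display Nyquist frequency f_d.
    fd = np.pi * N * v / 360.0                      # Eq. (6)
    return np.minimum(cutoff_frequency(eccentricity(v, d, N)), fd)
```

At the gaze point (d = 0) the model's cutoff exceeds the display Nyquist frequency for typical viewing setups, so f_m is limited by the display; far from the gaze point, f_c dominates.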

3.2 New Foveated Weighting Model in the Wavelet Domain by Considering Gaze Detection Error
A foveation-based visual sensitivity model in the wavelet domain has been proposed previously as14,16

S(v, x) = [S_w(\lambda, \theta)]^{\beta_1} [S_f(v, x)]^{\beta_2},   (9)

where \lambda is the wavelet decomposition level and \theta represents the LL, LH, HH, or HL subband of the wavelet transform. \beta_1 and \beta_2 are parameters that control the magnitudes of S_w and S_f, respectively.14,16 The LL subband has low-frequency components in both the horizontal and vertical directions. The HH subband includes high-frequency components in both directions. The HL subband comprises high-frequency components in the horizontal direction and low-frequency components in the vertical direction. Finally, the LH subband contains low-frequency components in the horizontal direction and high-frequency components in the vertical direction.18 The explanations given in Eqs. (1) to (10) represent the conventional foveation model of Refs. 14 and 16, but they do not consider the errors in gaze detection when calculating the foveation model. In general, there inevitably exists an error between the ground-truth gaze position and the calculated gaze position.9-11 However, the foveation-based visual sensitivity model of Eqs. (9) and (10) and Fig. 4 does not consider this error.
Therefore, we propose an eye foveation model that considers the gaze position and the error in detecting it, as follows. Since N is the width of an image and v is the viewing distance (measured in image widths) from the eye to the image plane,14,16 Nv is the calculated Z distance from the user's eye to the image plane. Assuming that \varepsilon is the accuracy of gaze tracking (in degrees), the consequent gaze detection error on the image plane is Nv \tan \varepsilon. Within the range of the gaze detection error (Nv \tan \varepsilon), all positions x should be treated the same as the foveation (user's gaze) position x_f^{\lambda,\theta}, since the error boundary is Nv \tan \varepsilon; thus, d_{\lambda,\theta}(x) of Eq. (10) becomes 0. Consequently, considering the gaze detection error, Eq. (10) is rewritten as

d_{\lambda,\theta}(x) = 0 if \|x - x_f^{\lambda,\theta}\|_2 \le Nv \tan \varepsilon, and d_{\lambda,\theta}(x) of Eq. (10) otherwise.   (11)

Based on Eqs. (9) and (11), the foveation-based contrast sensitivity mask of the single foveation point (gaze point) in the wavelet domain is found, as shown in Fig. 5(b). A four-level discrete wavelet transform (DWT) based on Daubechies wavelet bases is used. Brightness indicates the importance of the wavelet coefficients: higher contrast sensitivity is shown as a brighter gray level.
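The modified distance term of Eq. (11) can be sketched as follows. The function name is hypothetical, and the scaling of the error radius by 2^level for subband coordinates is an assumption of this sketch, not stated explicitly in the text.

```python
import numpy as np

def foveation_distance(x, gaze, N, v, eps_deg, level=0):
    """Distance term of the proposed model, Eq. (11): positions within the
    gaze-error radius N*v*tan(eps) of the gaze point are treated as the gaze
    point itself (distance 0); outside that radius the ordinary Euclidean
    distance is kept. `level` scales the radius for wavelet-subband
    coordinates (assumption)."""
    d = float(np.linalg.norm(np.asarray(x, float) - np.asarray(gaze, float)))
    radius = N * v * np.tan(np.radians(eps_deg)) / (2 ** level)
    return 0.0 if d <= radius else d
```

With N = 640, v = 1, and the paper's measured error of 1.12 deg, the radius is about 12.5 pixels, so points closer than that to the gaze point get maximal sensitivity.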

3.3 Extracting Video Features Considering the Eye Foveation Model
In this research, eyestrain is measured in relation to the changes in the three adjustment features of video: VH, edge, and motion information.To extract features considering gaze position and foveation, foveated images are obtained as follows.
The original color image is first separated into three images of red, green, and blue channels.These three images are decomposed using a DWT based on Daubechies wavelet bases.
The three decomposed images are multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). From these three foveated images, the red, green, and blue channel images in the spatial domain are obtained by the inverse DWT.18 From these three spatial-domain images, the hue image is obtained using the conversion matrix from RGB to hue, saturation, and intensity (HSI),18 and the VH is obtained as the first feature.
To obtain the motion component (MC) and edge component (EC), the original RGB color image is first transformed into a gray image, and the gray image is decomposed using a DWT based on Daubechies wavelet bases. The decomposed (gray) image is multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). Figure 6 shows an example of an original gray image and the corresponding foveated image produced by the proposed method. From the foveated image, the gray image in the spatial domain is obtained by the inverse DWT.18 The MC and EC are extracted as the second and third features, respectively, from the gray image in the spatial domain. The average magnitude calculated by the Canny edge detector in a gray image is taken as the value of EC. The average pixel difference between successive gray images is taken as the value of MC. The VH is averaged in a time window of 60 s, and the time window is moved with an overlap of 50 s, as in the method for measuring BR. The MC and EC are obtained in the same way. Using the calculated features of the foveated images, the eyestrain based on the average BR (Sec. 2.3) is measured in relation to changes in the three adjustment features of video: VH, MC, and EC.
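The per-frame feature computations above can be sketched in Python. This is a minimal sketch under stated assumptions: the function names are hypothetical, the DWT masking and inverse DWT steps are omitted (they would be applied to each channel before these computations), and the average gradient magnitude stands in for the paper's Canny edge magnitude.

```python
import numpy as np

def hue_channel(rgb):
    # RGB -> hue (degrees) via the standard HSI conversion.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return np.where(b <= g, theta, 360.0 - theta)

def variance_of_hue(rgb):
    # VH: variance of the hue image.
    return float(np.var(hue_channel(rgb)))

def motion_component(gray_prev, gray_cur):
    # MC: average pixel difference between successive gray frames.
    return float(np.mean(np.abs(gray_cur - gray_prev)))

def edge_component(gray):
    # EC: average gradient magnitude (stand-in for the Canny magnitude).
    gy, gx = np.gradient(gray)
    return float(np.mean(np.hypot(gx, gy)))
```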
Figure 7 shows some examples of features extracted from video images captured by a commercial web camera. Figure 7(a) shows an original image. Figure 7(e) shows the original gray image, including the foveation point as a white crosshair. Figures 7(f), 7(g), and 7(h), respectively, show the hue image, motion image, and edge image obtained from the image foveated by the conventional foveation model.14,16 The measured feature values of VH, MC, and EC for Figs. 7(f), 7(g), and 7(h) are 16879.22, 14.43, and 9, respectively.
Figures 7(i), 7(j), and 7(k), respectively, show the hue image, motion image, and edge image obtained from the image foveated by the proposed foveation model. The measured feature values of VH, MC, and EC for Figs. 7(i), 7(j), and 7(k) are 16858.78, 15.31, and 11.15, respectively, which differ from those determined by the previous method,14,16 which does not consider the gaze tracking error.
To measure eyestrain in this research, a commercial 19-in. LCD monitor and a commercial movie file were used. The environmental lighting condition was maintained without any external illumination. The temperature and humidity were kept constant, and there was no vibration or bad odor that could affect the experiments. Each subject watched the movie for 25 min 30 s. Eye response data were collected from 24 subjects [average age 26.54 (standard deviation 2.24); minimum and maximum ages 23 and 31, respectively]. To remove the dependency on watching distance (from the user's eye to the monitor) while covering realistic watching distances, the data of 12 subjects were obtained at a watching distance of 60 cm, and the data of the remaining 12 subjects were collected at a distance of 90 cm.

4 Experimental Results
As mentioned in Sec. 2.3, previous studies12,13 observed that the BR increases as a function of time on task. On this basis, other studies measured eyestrain under the assumption that more frequent blinking corresponds to greater eyestrain.2,4 Accordingly, the eyestrain based on BR was measured according to the extracted features (VH, MC, and EC). To validate the relationship between these three features and the eye responses, a correlation analysis was performed. In this analysis, the correlation coefficient ranges from −1 to +1: a coefficient close to +1 indicates that two variables are positively related, a coefficient close to −1 indicates that they are negatively related, and a coefficient close to 0 indicates no relationship between the variables. Table 1 shows the relationship between the three features and the eye responses, with the results calculated after removing outliers based on a 95% confidence interval.
Because the scales of VH, MC, EC, and BR are different, the values are normalized using the minimum-maximum scaling method.19 As listed in Table 1, the average correlation coefficients between the adjustment factors (VH, MC, and EC) and BR were calculated as 0.4115, −0.4059, and −0.5078, respectively. Based on the average correlation coefficients in Table 1, we found that VH is positively related to eyestrain, whereas MC and EC are negatively related to eyestrain. Therefore, an increase in VH causes an increase in eyestrain, and an increase in MC or EC reduces eyestrain. Linear regressions were also performed to analyze the change in eye response in relation to the change in the adjustment factors in Table 1. The average gradient is the slope of the line fitted by linear regression, and it represents the rate of change of VH, MC, or EC with respect to that of BR. On the basis of the results (average gradient) of the linear regression, it is observed that if the MC or EC increases, the eyestrain decreases; in contrast, if the VH increases, the eyestrain also increases. The R^2 values between the three adjustment factors and BR were calculated as 0.2310, 0.2095, and 0.3455, respectively. In Tables 1 and 2 and Fig. 8, R^2 refers to the goodness of fit of the regression;20 in general, greater values of R^2 represent a better fit. Figure 8 shows examples of 2-D dot graphs for one subject, where each dot denotes the average BR and its corresponding adjustment factor (VH, MC, or EC).
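The analysis described here (min-max normalization, Pearson correlation, the fitted slope as the "average gradient," and R^2) can be sketched as follows; the function names are hypothetical and this is a sketch of the standard statistics, not the authors' analysis code.

```python
import numpy as np

def minmax(x):
    # Minimum-maximum scaling to [0, 1].
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

def relate(factor, br):
    """Pearson correlation, regression slope ('average gradient'), and R^2
    between a normalized adjustment factor (VH, MC, or EC) and the
    normalized blinking rate."""
    f, b = minmax(factor), minmax(br)
    r = np.corrcoef(f, b)[0, 1]
    slope, intercept = np.polyfit(b, f, 1)   # factor change per unit BR change
    pred = slope * b + intercept
    ss_res = np.sum((f - pred) ** 2)
    ss_tot = np.sum((f - f.mean()) ** 2)
    return r, slope, 1.0 - ss_res / ss_tot   # (correlation, gradient, R^2)
```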
Because the y-intercept points of the fitted lines (where each fitted line crosses the y-axis) and the distributions of the data differ from subject to subject, it is difficult to obtain a meaningful result from the average over all subjects alone. Instead, we include both the average results and the individual results of all 24 subjects in Tables 1 and 2, respectively.
Figure 9 shows examples of gaze detection results. The circles represent the reference points at which each subject should look, and the crosshairs show the gaze points calculated by our gaze detection algorithm (explained in Sec. 2.2). A total of five subjects looked at the nine reference points five times each, and each crosshair shows the average point of the five trials per subject. We measured the gaze detection error as the angle between the vector to the reference point and the vector to the calculated gaze position. The gaze detection error between the reference and gaze points was about 1.12 deg. As seen in Fig. 9, the reference points differ from the calculated gaze points. In other words, the gaze error for each subject can occur randomly inside a circle whose radius corresponds to 1.12 deg, and we consider this circle when generating the eye foveation model. Therefore, the eye foveation model without this gaze detection error, as shown in Fig. 5(a), is different from the proposed eye foveation model, which considers the gaze detection error, as shown in Fig. 5(b).
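The angular error metric described here, the angle between the eye-to-reference vector and the eye-to-gaze vector, can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def angular_error_deg(eye, reference, gaze):
    """Angle (degrees) between the eye-to-reference and eye-to-gaze vectors,
    i.e., the gaze detection error as defined in the text."""
    a = np.asarray(reference, float) - np.asarray(eye, float)
    b = np.asarray(gaze, float) - np.asarray(eye, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```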

5 Conclusion
This research introduced a new eyestrain measurement method considering an eye foveation model. On the basis of this measurement, it was confirmed that a stable relationship exists between eyestrain and the three adjustment factors: color information, edge, and motion information.

Optical Engineering 073104-7, July 2013, Vol. 52(7)
Experimental results showed that a greater degree of VH induced higher eyestrain. In contrast, greater degrees of EC and MC induced relatively lower eyestrain. With recent developments in television technology, the smart TV, which includes a built-in camera, has become widespread. On the basis of the results of this research, an intelligent display can be envisioned that reduces the user's eyestrain by decreasing the VH or increasing the edge and motion information of a video, based on the eye response measured by the built-in camera.
In future work, the relationship between eyestrain and video factors in various kinds of displays, such as 3-D stereoscopic or holographic displays, will be investigated on the basis of gaze detection and the proposed foveation model.

Fig. 2
Fig. 2 Example of experimental setup of four near-infrared (NIR) illuminators attached to the corners of the liquid crystal display (LCD).

Fig. 3
Fig. 3 Example of four specular reflections and detection results.
S_w(\lambda, \theta) is the error sensitivity in subband (\lambda, \theta); the method for calculating S_w(\lambda, \theta) is given in Refs. 14 and 16. For a given wavelet coefficient at position x \in B_{\lambda,\theta} [where B_{\lambda,\theta} is the set of wavelet coefficient positions existing in subband (\lambda, \theta)], the distance d_{\lambda,\theta}(x) from the foveation point in the spatial domain is given in Refs. 14 and 16 as Eq. (10).

Fig. 5
Fig. 5 Foveation-based contrast sensitivity mask in the wavelet domain. (a) Sensitivity mask not considering the gaze tracking error (Refs. 14 and 16). (b) Sensitivity mask considering the gaze tracking error (proposed method).

Fig. 6
Fig. 6 Example of a foveated image. (a) Original image. (b) Foveated image of (a) obtained by the proposed method considering the gaze detection error (the user's foveation position is a white crosshair).

Fig. 7
Fig. 7 Examples of extracted features in a video image. (a) Original image. (b) Hue image. (c) Motion image. (d) Edge image. (e) Original gray image including the foveation point as a white crosshair. (f) Hue image after applying the conventional foveation model (Refs. 14 and 16). (g) Motion image after applying the conventional foveation model (Refs. 14 and 16). (h) Edge image after applying the conventional foveation model (Refs. 14 and 16). (i) Hue image after applying the proposed foveation model. (j) Motion image after applying the proposed foveation model. (k) Edge image after applying the proposed foveation model.

Fig. 8
Fig. 8 Graphs and linear regression results for one subject. (a) Relationship between blinking rate (BR) and variance of hue (VH). (b) Relationship between BR and motion component (MC). (c) Relationship between BR and edge component (EC).

Fig. 9
Fig. 9 Examples of gaze detection results (the circles represent the reference points at which each user should look; the crosshairs show the positions that are calculated by our gaze detection algorithm).

Table 1
Relationship between three adjustment features and eye responses (average value of experimental data from 24 subjects).

Table 2
Experimental values from 24 subjects.