The worse eye revisited: Evaluating the impact of asymmetric peripheral vision loss on everyday function

In instances of asymmetric peripheral vision loss (e.g., glaucoma), binocular performance on simple psychophysical tasks (e.g., static threshold perimetry) is well-predicted by the better-seeing eye alone. This suggests that peripheral vision is largely 'better-eye limited'. In the present study, we examine whether this also holds true for real-world tasks, or whether even a degraded fellow eye contributes important information for tasks of daily living. Twelve normally-sighted adults performed an everyday visually-guided action (finding a mobile phone) in a virtual-reality domestic environment, while levels of peripheral vision loss were independently manipulated in each eye (gaze-contingent blur). The results showed that even when vision in the better eye was held constant, participants were significantly slower to locate the target, and made significantly more head- and eye-movements, as peripheral vision loss in the worse eye increased. A purely unilateral peripheral impairment increased response times by up to 25%, although the effect of bilateral vision loss was much greater (>200%). These findings indicate that even a degraded visual field still contributes important information for performing everyday visually-guided actions. This may have clinical implications for how patients with visual field loss are managed or prioritized, and for our understanding of how binocular information in the periphery is integrated.


Introduction
Many common eye-diseases, such as glaucoma, papilledema, or optic neuritis, disproportionately affect peripheral vision. Often, the resultant vision loss is asymmetric, with one eye more badly affected than the other (Asfaw, Jones, Mönter, Smith, & Crabb, 2018). Since in everyday life we tend to view the world binocularly, the better eye may be able to 'compensate' to some degree for the poorer one. Previous data from psychophysical tasks suggest that this compensation is near-total, with binocular perimetric performance almost perfectly predicted by the better eye alone (Nelson-Quigg, Cello, & Johnson, 2000; Wood, Collins, & Carkeet, 1992). This implies that peripheral vision is 'better-eye limited': a belief which can have important implications for how patients with asymmetric peripheral vision loss are managed. This belief is also implicit in common practices, such as the way in which data from monocular eye tests are combined to estimate binocular vision (Integrated Visual Fields) (Crabb & Viswanathan, 2005; Musch, Niziol, Gillespie, Lichter, & Janz, 2017). In general, however, psychophysical measures tend to be poor predictors of real-world performance on vision-related activities of daily living (Brabyn, Schneck, Haegerstrom-Portnoy, & Lott, 2001; Rahi, Cumberland, & Peckham, 2009; Rubin, Muñoz, Bandeen-Roche, & West, 2000). And it is unclear to what degree the previous finding (that peripheral vision is 'better-eye limited') translates from synthetic, psychophysical tasks to high-level, real-world judgments involving complex stimuli. In the present study we addressed this question empirically, by asking normally-sighted observers to perform a typical, everyday task (finding a mobile phone in a cluttered domestic scene), while levels of simulated peripheral vision loss were manipulated independently in each eye.

Background literature
Evidence for the hypothesis that peripheral vision is 'better-eye limited' comes primarily from psychophysical studies using static threshold perimetry: a common clinical test in which the eye and head are fixed, and detection thresholds are measured for small (~0.5 deg), transient (~200 msec) spots of light, as a function of retinal location. For example, Nelson-Quigg et al. (2000) asked glaucoma patients to perform static threshold perimetry three times: once binocularly, and once with each eye monocularly. They found that at any given location in the visual field, binocular detection thresholds were well predicted by the maximum of the two corresponding monocular thresholds, and that this simple 'best location' method was not significantly less accurate at predicting binocular performance than more complex models in which data from both eyes were summed together (e.g., linear or quadratic summation; Jones, 2016). It is possible that some limited binocular summation may have occurred at locations where the sensitivities of the two eyes were very closely matched (relevant analyses not reported). However, the results as a whole indicated that in instances of asymmetric visual field loss, peripheral vision is primarily a function of the better-seeing eye alone. Wood et al. (1992) performed a similar experiment in healthy observers. They found that for foveal targets, binocular sensitivities were approximately √2 better than monocular sensitivities (quadratic summation), but that this 'binocular benefit' diminished as a function of eccentricity: becoming near-negligible by 15-30 degrees eccentricity. Older studies from as early as 1931 likewise observed that "there is no summation under conditions of peripheral retinal stimulation when the stimulated area is relatively small" (Graham, 1931).
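As a concrete illustration of the difference between these two prediction rules, the 'best location' (better-eye) model and quadratic summation can be contrasted in a few lines of Python. The sensitivity values below are invented for the example, and the function name and dB convention are ours, not those of the studies cited:

```python
import numpy as np

def predict_binocular_db(left_db, right_db, model="max"):
    """Predict binocular sensitivity (dB) from two monocular values.

    model="max":       the 'best location' rule (better eye alone).
    model="quadratic": sqrt-of-sum-of-squares on linear sensitivities.
    """
    if model == "max":
        return float(max(left_db, right_db))
    lin = 10 ** (np.array([left_db, right_db], dtype=float) / 10)
    return float(10 * np.log10(np.sqrt(np.sum(lin ** 2))))

# Asymmetric loss (e.g., 28 dB vs. 14 dB): the two rules nearly coincide.
asym_max = predict_binocular_db(28, 14, "max")         # 28.0 dB
asym_quad = predict_binocular_db(28, 14, "quadratic")  # ~28.0 dB

# Matched eyes: quadratic summation predicts a ~1.5 dB (sqrt 2) benefit.
match_quad = predict_binocular_db(28, 28, "quadratic")  # ~29.5 dB
```

When the two eyes are closely matched, quadratic summation predicts the classic √2 binocular benefit; under marked asymmetry the two rules make near-identical predictions, which is one reason perimetric data from patients with asymmetric loss appear 'better-eye limited'.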
In short, the psychophysical evidence is clear: when it comes to detecting small spots of low-contrast energy, peripheral vision is primarily limited by the better-seeing eye alone, and this is true both in normally-sighted people and in those with peripheral vision loss (glaucoma). Crucially, however, while highly constrained psychophysical paradigms such as static threshold perimetry are ideal for assessing function (and for detecting dysfunction) at the level of the retina, their findings may not generalize to real-world tasks, or to higher-order visual judgments in general. Indeed, even simply increasing the size of a light-spot stimulus has been found to cause rates of binocular integration in the periphery to increase (Wood et al., 1992). Likewise, binocular integration has been found to increase when the stimulus is held constant but the perceptual judgment made more complex (e.g., grating orientation discrimination vs. grating detection; Zlatkova, Anderson, & Ennis, 2001). It is unknown at present whether the benefits of binocular peripheral vision continue to increase if we move away from synthetic stimuli altogether, and instead consider the sorts of everyday perceptual judgments that patients report difficulties with most often, such as "finding something on a crowded shelf", or "noticing an object off to the side" (Mangione et al., 2001).
To date, the primary source of evidence regarding everyday perceptual judgments is patient self-reports. Their findings, however, are inconclusive. For example, if peripheral vision is better-eye limited, then scores on vision-related quality of life [VRQoL] questionnaires should be independent of visual field loss severity in the worse eye. However, while visual field loss in the better eye tends to be more strongly correlated with VRQoL (Hirneiss, 2014; Hyman, Komaroff, Heijl, Bengtsson, & Leske, 2005; Janz et al., 2001; Kulkarni, Mayer, Lorenzana, Myers, & Spaeth, 2012; McKean-Cowdin, Varma, Wu, Hays, & Azen, 2007; McKean-Cowdin, Wang, Wu, Azen, & Varma, 2008; Okamoto et al., 2015; van Gestel et al., 2010; Peters, Heijl, Brenner, & Bengtsson, 2015; Sawada, Yoshino, & Fukuchi, 2014; Sumi, Shirato, Matsumoto, & Araie, 2003; Sun, Rubin, Akpek, & Ramulu, 2017; Murata et al., 2013), visual field loss in the worse eye is also correlated with VRQoL (Hyman et al., 2005; Murata et al., 2013), and the difference in explained variance between the two eyes is typically small (i.e., ΔR² ≈ 0.1; van Gestel et al., 2010). Furthermore, some studies have failed to replicate even this small difference (Khadka et al., 2011; see also Haymes, LeBlanc, Nicolela, Chiasson, & Chauhan, 2008). Taken together, these results suggest that peripheral vision is not solely a function of the better-seeing eye alone, and that the worse eye may also contribute important information. However, it is difficult to draw any firm conclusions from patient self-reports. These studies are not typically intended to examine subtle variations in binocular summation, which may be masked by the intrinsic measurement error of patient self-reports (Jones, Garway-Heath, Azuara-Blanco, & Crabb, 2018). Furthermore, vision loss in the better and worse eye is often correlated (Arora et al., 2013; Choi, Jung, & Jee, 2018).
Finally, correlations with VRQoL alone also provide only limited insights regarding effect size: how much harder is it, for example, to "find something on a crowded shelf" as vision in the worse eye varies?

Present study
To quantitatively assess the 'real world' importance of a worse eye, the present study measured people's ability to perform a common, everyday visually-guided action (locating a mobile phone in a domestic household scene), while systematically manipulating the level of peripheral vision loss in each eye independently. Instead of examining real patients, we digitally simulated gaze-contingent impairments of varying magnitude in normally-sighted observers. The use of simulations allowed the size, shape, and severity of the impairment to be controlled and manipulated precisely in each eye independently. It also meant that each observer could experience every combination of impairments (fully within-subjects design), enabling us to derive a 'pure' measure of how vision loss affects performance, independent of individual differences in age, motivation, cognitive function, or overall health. Contrary to the belief that peripheral vision is 'better-eye limited', we hypothesized that performance on our real-world task would diminish (i.e., response times would increase) as peripheral vision loss in the worse eye increased. We also analyzed eye- and head-movements to examine whether degrading peripheral vision in one or both eyes caused systematic changes in search behaviors.

Task overview
Participants performed a virtual-reality visual search task in which they attempted to locate a known target (a mobile phone) in various domestic environments. Levels of peripheral vision loss (blur) were independently manipulated in each eye, trial-by-trial. The question was whether performance (response time, total length of head- and eye-movements) declined as peripheral loss in the worse eye increased.

Participants
Participants were twelve healthy adults (20-35 years, M = 26.2, SD = 5.03), with normal vision. Normal vision was defined as monocular letter acuity ≤0.3 logMAR, and no self-reported visual impairments. Written informed consent was obtained prior to testing. The study was approved by the host institution's ethics committee (UCL Psychology #11495/001) and was conducted in accordance with the Declaration of Helsinki. Participants received £20 compensation for their time.

Hardware
Stimuli were displayed on a FOVE0 Eye-Tracking VR headset (FOVE Inc., San Mateo, CA, United States). This contains a 2560 × 1440 WQHD OLED panel (1280 × 1440 pixels per eye), with a refresh rate of 70 Hz and a binocular field of view of approximately 100 degrees. The headset contained two integrated near-infrared eye-trackers (one per eye), which independently monitored gaze in each eye, with a single-frame spatial precision of approximately 1 deg, and a refresh rate of 120 Hz. The headset also contained inertial sensors (gyroscope, accelerometer) for monitoring head-pose. There was no crosstalk (Baker, Kaestner, & Gouws, 2016) between the two eyes, as all stimuli (and simulated impairments) were presented dichoptically. The software was controlled by an HP OMEN laptop (Hewlett-Packard Company, Palo Alto, CA, United States) containing an NVIDIA GTX 1050Ti graphics card (NVIDIA Corp, Santa Clara, CA, United States).

Stimuli
The search target was always a black smartphone (Fig. 1A, yellow box). The search environments consisted of 15 household rooms (bedrooms, bathrooms, kitchens, etc.), configured into a complete 'suburban' house (see Fig. 1A for examples). Depending on the observer's location, it was often possible to see into other rooms, connecting hallways, and the outdoor environs (garden, porch, neighboring houses, etc.). The whole scene was rendered using Unity3D v5.5.2 (Unity Technologies ApS, San Francisco, CA, United States), and displayed stereoscopically.

Simulating vision loss
As shown in Fig. 1C, the simulated vision loss consisted of a gaze-contingent 'tunnel' of peripheral blur. The retinal location and spatial extent of the blur did not vary, but its magnitude varied trial-by-trial depending on the test condition (see Section 2.5). A central circular region (±9° in radius, corresponding to the approximate extent of the macula lutea) was always spared, meaning that central vision was never impaired.
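The geometry of such a 'tunnel' can be sketched as a per-pixel blur weight that spares the central ±9° around the current gaze position. This is an illustrative reconstruction: the pixels-per-degree value, the function name, and the soft 2° transition ramp are our assumptions, not details taken from the study's shader:

```python
import numpy as np

def blur_weight_map(width, height, gaze_xy, px_per_deg,
                    spared_radius_deg=9.0, ramp_deg=2.0):
    """Per-pixel blur weight for a gaze-contingent peripheral 'tunnel':
    0 inside the spared central region, ramping up to 1 in the far
    periphery. gaze_xy is the current gaze position in pixels."""
    ys, xs = np.mgrid[0:height, 0:width]
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_deg
    return np.clip((ecc_deg - spared_radius_deg) / ramp_deg, 0.0, 1.0)

# Example: a 100 x 100 patch at 5 px/deg, with gaze at the centre.
w = blur_weight_map(100, 100, gaze_xy=(50, 50), px_per_deg=5.0)
```

The returned weight could then be used to mix between the sharp source image and its blurred counterpart at each pixel.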
The location of the impairment on the screen was updated in near-real-time based on the participant's current gaze location (gaze-contingent presentation), and so the impairment remained near-static on the observer's retinae. To make this possible, a rapid blurring algorithm was implemented, which allowed the impairment to be updated well within the screen's refresh rate of 70 Hz without any loss of frames (see below). Inevitably, however, there was a small amount of lag before any changes in gaze could be registered. The lag from the hardware was on the order of ~20 msec, and was composed primarily of the eye-camera exposure time (8 msec), the eye-tracker transmission time (8 msec), and the eye-tracker processing time (4 msec). If we further factor in the refresh rate of the screen (70 Hz) and 3D rendering time, the total expected lag was approximately 30-40 msec. To minimize any effects of eye-tracker calibration drift (i.e., which would cause the location of the simulated field loss to shift over time), the eye-tracker was regularly recalibrated throughout the experiment, as detailed below (2.7. Procedure).
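The latency figures quoted above can be combined into a simple budget. The sketch below follows our reading of the text (one frame of display latency plus some rendering overhead), rather than a published timing model:

```python
# Component latencies reported in the text, in milliseconds.
camera_exposure = 8.0   # eye-camera exposure
transmission = 8.0      # eye-tracker data transmission
processing = 4.0        # eye-tracker processing

hardware_lag_ms = camera_exposure + transmission + processing  # 20 ms

# One refresh of the 70 Hz display adds ~14.3 ms before a gaze change
# can appear on screen; 3D rendering adds a few milliseconds more,
# giving the ~30-40 ms total quoted in the text.
frame_ms = 1000.0 / 70.0
total_before_rendering_ms = hardware_lag_ms + frame_ms  # ~34 ms
```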
Blurring was performed in near-real-time using a custom OpenGL fragment shader, which we have made freely available online as part of a general-purpose 'sight loss simulator' toolbox (Jones, Somoskeöy, Chow-Wing-Bom, & Crabb, 2020). In short, prior to each screen refresh, a 'pyramid' of progressively more blurred images was generated by a repeated process of decimation (box-filtering, and downsampling the source image by a power of two). When drawing the image to the screen, pixels were sampled either from the original source image (regions of no blur), and/or were upsampled from this pyramid of progressively more decimated images (regions of blur), using trilinear texture filtering to interpolate between pyramid levels as required. This process is generally referred to as mipmapping, and has been detailed previously in the context of simulating visual field loss by Geisler and Perry (2002). For further technical specifics on the present implementation, see also Jones and Ometto (2018). The key advantage of this method is its computational efficiency, allowing the screen-location of the gaze-contingent blur to be updated with minimal delay (on every screen refresh).
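The decimation-and-interpolation scheme described above can be sketched in NumPy. This is a simplified, whole-image stand-in for the per-pixel GPU shader (function names and the toy image are ours; the released toolbox should be consulted for the actual implementation):

```python
import numpy as np

def build_pyramid(img, n_levels):
    """Repeatedly 2x2 box-filter and downsample by a power of two
    (decimation), producing progressively blurrier images."""
    pyr = [img.astype(float)]
    for _ in range(n_levels):
        src = pyr[-1]
        down = (src[0::2, 0::2] + src[1::2, 0::2] +
                src[0::2, 1::2] + src[1::2, 1::2]) / 4.0
        pyr.append(down)
    return pyr

def sample_blurred(pyr, level):
    """Linearly interpolate between two adjacent pyramid levels at a
    fractional blur level: the 'trilinear' step, applied here to the
    whole image rather than per pixel. Upsampling back to full
    resolution is by simple pixel repetition."""
    lo, hi = int(np.floor(level)), int(np.ceil(level))
    frac = level - lo
    up = lambda a, k: np.kron(a, np.ones((2 ** k, 2 ** k)))
    return (1 - frac) * up(pyr[lo], lo) + frac * up(pyr[hi], hi)

# Example: a 64 x 64 test image, blurred at fractional level 1.5.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
pyr = build_pyramid(img, 4)
blurred = sample_blurred(pyr, 1.5)
```

Because each box-filtering step preserves the image mean, the blurred output alters only the spatial-frequency content, not the overall luminance.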
The type of blur created by this process is qualitatively similar to a Gaussian low-pass filter and would not, for example, have completely removed all higher-frequency information. Note also that this approach is intended only as a model of retinal loss (e.g., glaucoma), and was applied as a 'post-processing' effect to the final rasterized image. If attempting to simulate vision loss due to optical defocus, it would also be important to incorporate phase-reversals (Murray & Bex, 2010; O'Hare & Hibbard, 2013), and to take into account the distance of each object in the visual scene. The present approach also assumes that observers had negligible refractive error in their periphery, which might otherwise mask the effects of the blur (Anderson, McDowell, & Ennis, 2001). This was thought reasonable for our cohort of young, normally-sighted adults. However, even young adults with no foveal refractive error can display large degrees of peripheral astigmatism, with substantial variability between observers (Gustafsson, Terenius, Buchheister, & Unsbo, 2001). This assumption may therefore have introduced a degree of noise (or bias) into the present results: error which could be corrected for in future by adjusting analyses to take into account the unique optical characteristics of each observer.

Fig. 1 (caption, continued). The FOVE0 head-mounted display, containing an independent screen for each eye, and near-infrared eye-tracking. (C) The five simulated impairment levels (Level 0 = no blur). The macula was always spared, by constraining the simulated impairment such that no blur was ever applied to a circular region of radius ±9° (white arrow), centered on the current gaze location (red crosshairs: shown here for illustration only). Note that the observer's gaze was unconstrained (free viewing), and was tracked in near-real-time using the headset's near-infrared sensor.
Finally, note that this filter was intended only as a first approximation of real eye disease. Blur (low-pass filtering) provided a convenient way to parametrically manipulate the level of vision loss in each eye, and is grossly concordant with the self-reports of glaucoma patients with moderate or advanced field loss, who often describe their vision loss in terms of regions of 'blurry' vision (Crabb, Smith, Glen, Burton, & Garway-Heath, 2013). However, blur does not provide a comprehensive simulation of real glaucoma. Visual impairments are highly heterogeneous, and often involve other symptoms, including metamorphopsia, a loss of lower-frequency contrast, and regions of the visual field becoming jumbled, missing, or elided (Hu et al., 2014). Likewise, note that the shape of the visual impairment (an extreme 'tunnel vision' effect) meant that all regions of peripheral vision were degraded. This is not representative of real glaucomatous visual field loss, which is often irregular and includes regions of spared vision. In future, it may be instructive to explore how covarying the shape of the visual field loss also affects performance on real-world tasks. However, this was outside of the scope of the present work.

Test conditions
The shape and location of the simulated vision loss was constant. The only free parameter was the magnitude of blur, which on each trial took one of five levels 〈0,1,…,4〉. These levels corresponded to nominal source-image widths of 1280 pixels (level 0, no blur), 640 (level 1), 380 (level 2), 240 (level 3), and 20 (level 4). To put these values in context, level 4 was sufficiently great that, had it been applied uniformly across the whole visual field of both eyes, the task would have been impossible (see Supplemental Material D). The level of blur was independently manipulated in each eye, giving a total of 25 (5 × 5) test conditions (see Fig. 2A). Each of these 25 test conditions was presented 10 times in random order, for a total of 250 trials.
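Assuming these nominal widths map onto the blur pyramid described in the 'Simulating vision loss' section, each level corresponds to a (possibly fractional) number of power-of-two decimations of the 1280-pixel source, with fractional depths reached by trilinear interpolation between adjacent pyramid levels. A sketch of that mapping (the 'depth' framing is our own interpretation):

```python
import math

# Nominal source-image widths for the five blur levels given in the text.
level_widths = {0: 1280, 1: 640, 2: 380, 3: 240, 4: 20}

# Effective pyramid depth: how many halvings of the 1280-pixel source
# each level corresponds to (fractional depths imply interpolation
# between two adjacent pyramid levels).
pyramid_depth = {lvl: math.log2(1280 / w) for lvl, w in level_widths.items()}
# level 0 -> 0.0 halvings, level 1 -> 1.0, level 4 -> 6.0
```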

Procedure
Participants were instructed to "find the phone as quickly as possible". On each trial, one of fifteen rooms was randomly selected, and the target was randomly placed at one of twenty locations within the room: locations predefined separately for each room. The location and starting orientation of the participant was also randomized, constrained so that the participant was never directly facing the target at trial onset.
Throughout the trial, gaze and head-pose were tracked continuously, using the headset's near-infrared and gyroscopic sensors, respectively. Participants indicated when they had located the target by pressing a button on a response pad. To avoid errant data from misclicks, a response was confirmed as correct only if the participant's gaze fell within 45° of the target at the time when they pressed the response button. Participants were also monitored by the experimenter throughout via an external computer screen, to ensure they were performing the task correctly. For safety, participants were seated on a rotating office chair, but were free to rotate their head, body, and eyes when searching for the target.
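The misclick check can be expressed as a simple angular test between the gaze direction and the direction from the observer to the target. A sketch (the vector representation and names are ours; the study's actual check may have been computed differently, e.g., in screen coordinates):

```python
import numpy as np

def response_valid(gaze_dir, target_dir, max_angle_deg=45.0):
    """Accept a button press only if the angle between the gaze
    direction and the direction to the target is within 45 degrees.
    Both arguments are 3D direction vectors (need not be unit length)."""
    g, t = np.asarray(gaze_dir, float), np.asarray(target_dir, float)
    cos_a = np.dot(g, t) / (np.linalg.norm(g) * np.linalg.norm(t))
    angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return bool(angle_deg <= max_angle_deg)
```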
The trial ended either when the participant indicated they had found the target (by pressing the response button), or after a maximum of 45 s had elapsed. The 45-second time limit was intended to keep participants motivated throughout testing, and resulted in a total of 104 trials being aborted across all participants (~3%). Data from aborted trials are not reported.
Each participant completed 250 test trials: 10 trials for each of the 25 test conditions (see Section 2.6). Participants were encouraged to take a short break every 25 trials (eyes closed with the headset on), and mandatory breaks were given after every 75 trials, during which participants removed the headset. The total testing time, including breaks, was approximately 90 min.
Before the start of the experiment, and after every break (i.e., after a maximum of 75 trials), the eye-tracker was calibrated using the manufacturer-supplied procedure. All calibrations were validated, both by the software's own internal algorithms, and by an informal process of inspection in which the experimenter manually manipulated the location of a target (a red dot) on the screen, and observed the participant's estimated gaze location. If the headset reported poor calibration, or if the experimenter was not completely satisfied with its accuracy, the calibration was re-run. This happened on ~1% of occasions, generally if the participant physically adjusted the position/straps of the head-mounted display during the calibration procedure. During testing, estimated gaze was also visualized on a separate screen, overlaid onto the visual scene. The experimenter monitored this screen for any unusual gaze behavior, and could manually trigger a recalibration. In practice, however, no interventions were required.
Before testing, participants completed a practice block of 10 trials, designed to familiarize them with the target, the task, and the various impairment levels. All participants completed these trials without difficulty (minimum 9 out of 10 responses correct and within the time limit).

Statistical analysis
The primary question was whether performance varied as vision loss in the worse eye increased (i.e., after adjusting for individual variability, and for the level of vision in the better eye). To test this statistically, data were entered into a Linear Mixed-Effects [LME] model, specified in Wilkinson notation (Wilkinson & Rogers, 1973) as:

y ~ 1 + WORSEEYE + BETTEREYE + (1 | PARTID)   (1a)

where WORSEEYE was the level of visual impairment (blur) in the worse eye (0-100%), BETTEREYE was the level of visual impairment in the better eye (0-100%), and PARTID was the participant ID (1-12). The dependent variable, y, was computed for each trial, and variously took the form: (i) log10 Response Time, in seconds; (ii) log10 Total Scan-path Length, in degrees visual angle; (iii) log10 Total Head-turn Length, in degrees; and (iv) answer correct (0 or 1). A significant main effect of WORSEEYE would mean that a given outcome measure, y, varied as peripheral blur in the worse eye increased. In practice, the LME model in Eq. (1a) was fitted by the MATLAB function "fitlme" (maximum likelihood method), and the significance of the WORSEEYE predictor variable was formally evaluated using Simulated Likelihood Ratio Tests (Stram & Lee, 1994). Note also that this same model can be specified in standard mathematical notation as:

y_im = β0 + β1·WORSEEYE_i + β2·BETTEREYE_i + b_0m + ε_im   (1b)

where β0 is the mean intercept, β1 and β2 are the regression coefficients for the worse-eye and better-eye predictors, and b_0m is a random intercept which was allowed to vary across the m participants. For all figures and descriptive statistics, data are reported in linear units, using non-parametric statistics (e.g., medians), and 95% confidence intervals were computed using bootstrapping (N = 20,000; bias-corrected and accelerated method).

Results

Fig. 3 shows how response time varied as the magnitude of peripheral loss in each eye was manipulated independently. For any given magnitude of loss, performance was degraded more when the vision loss was bilaterally symmetric (i.e., the positive diagonal of Fig. 3A) than when it was applied to one eye only (the bottom row and leftmost column of Fig. 3A). For example, when the impairment was maximal in both eyes (top-right point of Fig. 3A), grand-median search times across all participants increased by over 200% (4.9 to 16.0 s; Wilcoxon Signed-Rank test; P < 0.001).
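As an open-source counterpart to the MATLAB "fitlme" analysis, a model of the form of Eq. (1a) can be approximated on simulated data with ordinary least squares, absorbing the participant random intercept into per-participant dummy intercepts. This is a fixed-effects stand-in, not the paper's exact estimator, and all data below are synthetic (the generative coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: 12 participants, each completing every 5 x 5
# combination of blur levels in the left and right eye (coded 0-100%).
lvl = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
left, right = [a.ravel() for a in np.meshgrid(lvl, lvl)]
worse, better = np.maximum(left, right), np.minimum(left, right)

n_part, n_cond = 12, worse.size
pid = np.repeat(np.arange(n_part), n_cond)
W, B = np.tile(worse, n_part), np.tile(better, n_part)

# Assumed generative model for log response time: both eyes matter,
# with the better eye weighted more heavily.
y = (0.7 + rng.normal(0, 0.02, n_part)[pid]       # participant intercepts
     + 0.001 * W + 0.004 * B
     + rng.normal(0, 0.05, n_part * n_cond))      # trial-level noise

# Design matrix: per-participant intercept dummies plus the two blur
# predictors, mirroring y ~ 1 + WORSEEYE + BETTEREYE + (1|PARTID).
X = np.column_stack([np.eye(n_part)[pid], W, B])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
beta_worse, beta_better = beta[-2], beta[-1]
```

Recovering a reliably positive worse-eye coefficient from such data is the analogue of a significant WORSEEYE effect; for the full random-effects model, a dedicated LME fitter (e.g., MATLAB's fitlme or R's lme4) would be used.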

Response time
Varying vision loss in the worse eye only (holding the better eye constant) had a smaller, but still measurable, effect: causing median response times to increase by up to 25% (4.9 vs. 6.2 s; Fig. 3B). The significance of this effect was confirmed by fitting the LME model in Eq. (1a), and examining the effect of WORSEEYE (t = 2.77, P = 0.006). Taken together, these results indicate that performance is partially determined by the amount of peripheral vision loss in the worse eye (hypothesis H2 in Fig. 2). The fact that the worse eye had some effect, but only partially determined performance, can also be seen intuitively by looking left/right along the bottom row of Fig. 3A, and comparing the pattern of results to the three hypotheses in Fig. 2B.
Using simple linear regression, it was observed that variations in peripheral vision loss in the worse eye explained ~2% of the variability in response times (F = 33.67, P ≪ 0.001, R² = 0.017), versus ~7% for the better eye (F = 139.9, P ≪ 0.001, R² = 0.067). This confirms that the better eye is the single greatest predictor of performance, though it is worth noting that vision loss alone left the majority of performance variability unexplained (see also Supplemental Material A for further analysis).
By inspection of Fig. 3B, it can be seen that there was a possible interaction between the two eyes: increasing vision loss in the worse eye affected search times most when vision loss in the better eye was relatively small. To explore this further, post-hoc tests were performed in which the LME model (Eq. (1a)) was fitted independently to the data in each of the 4 panels of Fig. 3B. The main effect of WORSEEYE was significant (both P < 0.05) in the left two panels (when vision loss in the better eye was minimal), but was not significant (both P > 0.05) in the right two panels (when vision loss in the better eye was moderate or severe). This indicates that vision loss in the worse eye affected performance only when the better eye was relatively healthy.

Eye-and head-movements
The amount that participants moved their eyes (Fig. 4) and head (Fig. 5) when searching for the target exhibited the same pattern of results as the response time data (Fig. 2B). Thus, participants made more searching movements as vision loss in the worse eye increased, and this effect was statistically significant for both eye-movements (t = 2.1, P = 0.016) and head-movements (t = 2.8, P = 0.005). Again, there was also an interaction between the two eyes, with post-hoc tests indicating that the effect of WORSEEYE was significant only when vision loss in the better eye was minimal (the first two panels of Figs. 4A/5A; all P < 0.05). Overall, these results provide convergent evidence that peripheral loss in the worse eye substantively affects task performance, particularly when the fellow (better) eye is relatively healthy.

Response accuracy
Response accuracy (percent correct responses) did not change across any of the visual impairment conditions, and was close to 100% throughout (M = 96%; see Supplemental Material B). Thus, there was no effect of WORSEEYE when the mixed-effects analysis was run (t = −0.07, P = 0.944), and a one-way ANOVA found no significant difference in percent-correct responses between any of the 25 impairment conditions (F = 0.04, P = 0.340). This indicates that while peripheral vision loss caused participants to be slower in locating the target, participants were no more likely to mistake the target for another object when experiencing simulated peripheral loss. This is to be expected, given that central vision was always spared in both eyes.

Discussion
The results showed that participants were slower to find an everyday object in a cluttered domestic scene (and made more head- and eye-movements when searching) as simulated peripheral vision loss in the worse eye increased, even when vision in the better eye remained constant. This indicates that for everyday visually-guided tasks, peripheral vision is not 'better-eye limited', and that the worse eye also provides important information for daily living. The benefit of the worse eye was greatest when vision in the better eye was relatively healthy, suggesting that the preservation of fellow-eye vision may be most important in early-to-moderate cases of field loss.
Substantial trial-by-trial variability in performance was apparent, as indicated by the large error bars in Figs. 3-5, and by the relatively small amount of response variability explained by sight loss alone. This is to be expected given the relatively uncontrolled task: No concerted attempt was made to match search-environments/target-locations for difficulty, and it is highly likely that the phone was objectively easier to locate on some trials (e.g., because some rooms contained fewer likely locations, or because the visual dissimilarity of target vs background was greater). The fact that there was a clear and consistent overall pattern to the data, despite this lack of stimulus/experimental control, we take as particularly good evidence that the reported effect is genuine, and has substantive real-world implications. Notably, it is possible to contrive stimuli for which the present effects are greater and more consistent than those observed here. For example, we report in Supplemental Material C a variant of the present task in which the target and environments were random textures, and where the effect of degrading the worse eye was much greater. Such task-variants could be of interest for people looking to adapt the present paradigm to detect or quantify visual impairment.

Comparison with previous literature
The present data stand in contrast to previous findings using more basic psychophysical tasks (static threshold perimetry), on which binocular sensitivity in the periphery is largely predicted by the better eye alone (Nelson-Quigg et al., 2000; Wood et al., 1992). This mismatch highlights that basic psychophysical tasks do not always provide a perfect model of an individual's ability to perform 'real world' perceptual judgments: a fact which has also been widely reported previously, particularly in the context of visual acuity (Brabyn et al., 2001; Rahi et al., 2009; Rubin et al., 2000) and visual field loss (Abe et al., 2016; Chun et al., 2019; Cheng et al., 2015). The present findings are, however, broadly consistent with qualitative clinical data. For example, several studies have reported reductions in visual disability and symptoms following second-eye cataract surgery, despite often minimal changes in acuity (Elliott, Patla, & Bullimore, 1997; Laidlaw & Harrad, 1993). One corollary of this is that we may in future need to move away from purely 'synthetic' stimuli, such as light-spots, gratings, or isolated optotypes, if we wish to fully characterize the functional impact of sight loss.
The fact that eye-movements increased with increasing peripheral loss is consistent with a number of previous studies examining the natural eye-movements of glaucoma patients (Asfaw et al., 2018; Smith, Glen, & Crabb, 2012; Lee, Black, & Wood, 2017; Dive et al., 2016). To our knowledge, only one study, by Dive et al. (2016), has examined head-movements in glaucoma patients. They likewise reported elevated levels when performing 'real world' tasks, although no quantitative data were reported. The present study confirms these observations by providing direct, simultaneous measurements of head- and eye-movements under conditions of simulated peripheral vision loss. It is interesting to note that in the present study, the observed changes in head-movements were at least as great as the changes in eye-movements. This suggests that head-movements might provide a possible biomarker for the detection of eye disease, as has been suggested previously for eye-movements (Asfaw et al., 2018). Notably, making such measurements would require us to move away from conventional visual assessments, in which the eye and head are constrained by fixation targets and chinrests.

Fig. 3 (caption fragment). …i.e., when the left or right eye was the worse eye. For example, the black square in the first panel is the average response time when peripheral blur in the better eye was Level 0 (no blur), and peripheral blur in the worse eye was Level 2 (moderate blur in either the left or right eye). As illustrated previously in Fig. 2B, if the worse eye had no effect on performance then all of the bars within a given panel should fall along the horizontal pink line.

Fig. 4. Median scanpath length (amount of eye-movements) on each trial, in degrees. Same format as Fig. 3.

[Running header: H. Chow-Wing-Bom, et al., Vision Research 169 (2020) 49-57]

Limitations
The present study employed simulated impairments, rather than real patients. This was necessary, as it allowed us to systematically manipulate the impairment and control for individual differences. It does, however, mean that we have to interpret the results with caution. To the extent that real eye-disease may cause not just high frequency loss (blur), but also a range of other disturbances (low frequency loss, spatial distortions, chromatic anomalies, crowding, infilling, etc.), the worse eye may play an even greater or smaller role than was observed here. Notably, the simulator used in the present study is also capable of incorporating many of these other effects, and these effects can be linked to empirical data from real patients (Jones & Ometto, 2018). It would therefore be possible in future to conduct a more comprehensive assessment of how different forms of vision loss affect everyday visually-guided actions, or a detailed analysis of how a particular individual's visual profile affects daily living. These would be non-trivial undertakings, but the code for the present simulations has been made freely available online for anybody interested in pursuing this line of inquiry (Jones et al., 2020). It likewise remains an open question whether performance would change over time as the individual learns to adapt to their impairment, a consideration which may be particularly relevant to diseases such as glaucoma, where sight loss is often gradual and progressive. The question of adaptation could be explored empirically in future, for example through the use of Augmented Reality simulations, which in principle can be worn for days or weeks at a time.
The present data demonstrated that peripheral sight loss affects binocular performance on everyday visually-guided tasks, even when the loss is unilateral. However, they do not tell us how or by what means the worse eye contributes task-relevant information. One possibility is that the change in performance was due primarily to an overall loss of (contrast) sensitivity. The normal visual field comprises a central binocular region and two uniocular flankers (Henson, 1993). By adding blur to the periphery of one eye, the binocular region is reduced, limiting opportunities for binocular summation, while one of the flanking regions is lost altogether, effectively narrowing the total field of view. The result is a narrower, shorter 'hill of vision'. In addition to overall changes in sensitivity, however, disrupting binocular vision through the addition of monocular blur can also have secondary consequences, including aberrant motion processing (Burge, Rodriguez-Lopez, & Dorronsoro, 2019) and a loss of (high-frequency) stereopsis (Li et al., 2016). The latter may be particularly significant for the present task (visual search), since stereopsis is known to be important for 'breaking camouflage' (Yellott, 1971), while the former may have similar consequences by removing another important depth cue: motion parallax. From the present study, it is not possible to determine which, if any, of these factors are important for explaining the pattern of results observed. In future, however, such questions could be explored experimentally by modifying the present paradigm. For example, instead of applying blur one could selectively remove binocular disparity from the periphery of both eyes. Alternatively, participants could view a 2D plane instead of a 3D environment, to further remove motion parallax cues.
To the extent that the same pattern of results continued to hold, it would indicate that it is these secondary depth cues, rather than a loss of sensitivity, that are primarily responsible for changes in performance observed.
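As a rough intuition for the sensitivity account described above, binocular detection sensitivity is often modelled as the quadratic (Pythagorean) summation of the two monocular sensitivities. The sketch below uses this standard textbook model with arbitrary illustrative sensitivity values; neither the model nor the numbers were fitted in the present study.

```python
import math

def binocular_sensitivity(left: float, right: float) -> float:
    """Quadratic (Pythagorean) summation of two monocular sensitivities."""
    return math.sqrt(left ** 2 + right ** 2)

# Two matched healthy eyes: binocular gain over one eye alone is
# sqrt(2), i.e. ~41% summation benefit.
matched_gain = binocular_sensitivity(1.0, 1.0) / binocular_sensitivity(1.0, 0.0)

# Blur the worse eye (illustrative 70% sensitivity loss): the gain
# over the better eye alone collapses to ~4%.
asymmetric_gain = binocular_sensitivity(1.0, 0.3) / binocular_sensitivity(1.0, 0.0)
```

Under this model, peripheral blur in one eye erodes most of the summation benefit even though the better eye is untouched, consistent with the 'narrower, shorter hill of vision' described above.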
A further limitation of the present study is that the central macular region of ±9° was always spared. We are therefore unable to infer what the effect of unilateral loss would be if this 'healthy' region were reduced. The benefits of binocular summation in central vision are, however, well-established, with previous studies showing that observers are better at detecting faint objects (Blake & Fox, 1973), resolving fine spatial detail (Heravian, Jenkins, & Douthwaite, 1990; Horowitz, 1949), or performing delicate visuomotor actions (Read, Begum, McDonald, & Trowbridge, 2013), when fixating with two foveae versus just one. We therefore predict that the preservation of the fellow eye would be at least as beneficial in instances of central or paracentral vision loss. Consistent with this, the same qualitative pattern of results as reported here was observed in a small cohort of observers when the blur was applied uniformly across the whole visual field (see Supplemental Material D).

Implications & future work
The present study indicates that there may be a real-world cost to unilateral peripheral vision loss that is not captured by traditional psychophysical measures, such as static threshold perimetry. Such deficits could have particular implications for time-critical tasks, such as driving. Consider the conditions involving a purely unilateral impairment (the leftmost panel of Figs. 3-5B): such individuals would be expected to score near-perfectly on a standard binocular visual field assessment (Graham, 1931; Nelson-Quigg et al., 2000; Wood et al., 1992). They would therefore be considered legally fit to drive in most countries, even professionally (Kotecha, Spratt, & Viswanathan, 2008). Yet in the present task, such individuals were 1 s (25%) slower, on average, to locate the target object (and on some trials much slower). To put this in context, when driving at ~30 mph (~50 km/h), an additional 1 s delay in braking is enough to increase stopping distance by 40% (American Association of State Highway and Transportation Officials, 2011), and double the likelihood of severely injuring a pedestrian three car-lengths in front (Tefft, 2013). This is particularly concerning given that individuals with unilateral glaucoma are no more likely than their normally-sighted peers to cease driving (Ramulu, West, Munoz, Jampel, & Friedman, 2009). At present, we can of course only speculate precisely how the present results would translate to other real-world scenarios, such as driving. It may be, for example, that some drivers can compensate for their vision loss through increased vigilance (Kübler et al., 2015; though see Prado Vega, van Leeuwen, Rendón Vélez, Lemij, and de Winter, 2013). Notably, the technologies developed for the present work are compatible with all modern software and hardware devices.
It would therefore be possible to apply the same simulated impairments to existing driving simulators, and to observe empirically their effects on performance. More generally, the present work highlights the importance of measuring not only response accuracy, but also response speed and effort, when characterizing a visual impairment. Thus, compared to healthy vision, even an intense, bilateral simulated impairment caused no significant change in response accuracy, which was close to 100% throughout. It did, however, cause response times to be significantly slower (by over 200%, in the bilateral-symmetric case), and compelled participants to make substantially more head- and eye-movements. These findings echo a recent report by Barsingerhorn, Boonstra, and Goossens (2018), who observed that children with ocular dysfunction were slower at performing a simple spatial judgment (Landolt-C orientation-identification) than their normally-sighted peers, even after the stimuli were matched in size for relative acuity. Taken together, such findings suggest that when characterizing vision loss, it may be prudent to move beyond simple functional measures of accuracy. It may, for example, be desirable to consider a treatment effective if it makes visual judgments faster or less tiring, even if there is no substantive change in the size or contrast of the smallest identifiable object. It is interesting to note that this is already an established principle in the auditory community, where fatigue, effort and stress are considered when evaluating hearing impairment (Hicks & Tharpe, 2002). Such constructs are difficult to quantify using traditional 'pen and paper' vision tests, but can be more easily probed using the sorts of digital technologies reported here.
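Returning to the driving example above, the effect of an extra 1 s delay on stopping distance can be sketched with simple kinematics. The 3.4 m/s² deceleration is the value commonly used in AASHTO design guidance; the 1 s baseline reaction time here is an illustrative assumption, and the resulting percentage varies with these choices.

```python
def stopping_distance_m(speed_kmh: float, reaction_s: float,
                        decel_ms2: float = 3.4) -> float:
    """Reaction distance plus braking distance under constant deceleration."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

base = stopping_distance_m(50, reaction_s=1.0)     # ~42 m
delayed = stopping_distance_m(50, reaction_s=2.0)  # ~56 m

# At 50 km/h, one extra second of reaction time adds ~14 m, a ~33%
# increase under these assumptions (shorter baseline reaction times
# push the percentage higher, toward the ~40% figure cited above).
increase = (delayed - base) / base
```

The added distance is simply speed multiplied by the extra delay, which is why the penalty grows linearly with speed while the percentage depends on the assumed baseline.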
Finally then, the present study highlights the potential utility of Virtual- and Augmented-Reality simulations for assessing the real-world impact of visual impairments. As discussed previously, traditional measures of visual function, such as acuity and visual field loss, typically explain only a minority (10-30%; Abe et al., 2016; Chun et al., 2019; Cheng et al., 2015) of the variability in self-reported vision-related quality of life. Meanwhile, functional evaluations in real-life scenarios such as driving are difficult to obtain, and sometimes even dangerous. VR technologies such as those presented here may provide a novel platform with which to observe directly a person's ability to perform key everyday tasks, and to do so in a way that is controlled, quantifiable, replicable, and safe. Notably, however, substantial hardware development is still required before VR technology will be suitable for most patients. This includes the development of lighter, more comfortable headsets, and the ability to integrate appropriate refractive correction across a wide range of prescriptions.

Summary and conclusions
1. Varying degrees of simulated peripheral vision loss (blur) were applied to one or both eyes of twelve normally-sighted adults.

2. Participants were slower to find an everyday object in a cluttered domestic scene, and made more head- and eye-movements when searching, as peripheral vision loss in the worse eye increased. This suggests that peripheral vision is not entirely 'better-eye limited', and that even the worse eye contributes important information for performing activities of daily living.

3. More generally, the data suggest that simple synthetic tasks may not always be sufficient to fully characterize visual impairments, and that VR technologies might in future provide a productive tool with which to observe and quantify the everyday impact of vision loss.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.