Open Access
Article  |   March 2021
Look where you go: Characterizing eye movements toward optic flow
Author Affiliations
  • Hiu Mei Chow
    Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
    doris.hm.chow@ubc.ca
  • Jonas Knöll
    Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Celle, Germany
    jonas.knoell@googlemail.com
  • Matthew Madsen
    Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
    madsenmat@gmail.com
  • Miriam Spering
    Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
    Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
    Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
    miriam.spering@ubc.ca
Journal of Vision March 2021, Vol.21, 19. doi:https://doi.org/10.1167/jov.21.3.19
Abstract

When we move through our environment, objects in the visual scene create optic flow patterns on the retina. Even though optic flow is ubiquitous in everyday life, it is not well understood how our eyes naturally respond to it. In small groups of human and non-human primates, optic flow triggers intuitive, uninstructed eye movements to the focus of expansion of the pattern (Knöll, Pillow, & Huk, 2018). Here, we investigate whether such intuitive oculomotor responses to optic flow are generalizable to a larger group of human observers and how eye movements are affected by motion signal strength and task instructions. Observers (N = 43) viewed expanding or contracting optic flow constructed by a cloud of moving dots radiating from or converging toward a focus of expansion that could randomly shift. Results show that 84% of observers tracked the focus of expansion with their eyes without being explicitly instructed to track. Intuitive tracking was tuned to motion signal strength: Saccades landed closer to the focus of expansion, and smooth tracking was more accurate when dot contrast, motion coherence, and translational speed were high. Under explicit tracking instruction, the eyes aligned with the focus of expansion more closely than without instruction. Our results highlight the sensitivity of intuitive eye movements as indicators of visual motion processing in dynamic contexts.

Introduction
Many daily functions, from catching prey to walking or driving to work, involve moving through our dynamic visual environment. To adequately control self-motion, we need to know not only whether we are moving but also where and how fast. These aspects of self-motion are informed by multiple sensory cues, one of which is optic flow, the visual motion pattern projected onto our retina when we move through the visual environment (Gibson, 1950; Lappe, Bremmer, & van den Berg, 1999; Vaina, Beardsley, & Rushton, 2004). Optic flow expands or radiates outward when we move forward, and it contracts or radiates inward when we move backward. The singular point of convergence or radiation is termed the focus of expansion (FOE) and often indicates heading direction. 
In humans and other animals, optic flow is critically important in guiding locomotion (Gibson, 1958; Warren, Kay, Zosh, Duchon, & Sahuc, 2001), posture maintenance (e.g., Bardy, Warren, & Kay, 1996; Bardy, Warren, & Kay, 1999), and navigation such as steering (e.g., Li & Niehorster, 2014). Other animals also adjust their locomotive or flight behavior based on optic flow, such as insects (Srinivasan, Zhang, Altwein, & Tautz, 2000) and birds (Bhagavatula, Claudianos, Ibbotson, & Srinivasan, 2011; Dakin, Fellows, & Altshuler, 2016). Accordingly, humans and macaque monkeys are able to reliably discriminate FOE position changes as small as 1° of visual angle (Britten & Van Wezel, 2002; Warren & Hannon, 1988). The ability to perceive and discriminate the FOE position depends on where observers look (Crowell & Banks, 1993; Gu, Fetsch, Adeyemo, DeAngelis, & Angelaki, 2010; Warren & Kurtz, 1992). These studies tested heading direction discrimination thresholds while observers fixated at different eccentricities relative to the FOE. They found that performance scaled as a function of proximity to the fovea, with the best performance occurring when the FOE was near the fovea. 
Viewing an optic flow stimulus with a stationary and eccentric FOE generates eye movements toward the FOE, such as in macaque monkeys (Angelaki & Hess, 2005; Lappe, Pekel, & Hoffmann, 1998; Lappe, Pekel, & Hoffmann, 1999) and humans (Niemann, Lappe, Büscher, & Hoffmann, 1999). These studies showed that observers make a combination of saccades directed toward the FOE, as well as slow-tracking reflexive eye movements following retinal motion near the fovea. Knöll, Pillow, and Huk (2018) found that different primate species—one human, two macaques, and one marmoset monkey—intuitively tracked a dynamically changing FOE with their eyes despite minimal training or instruction. In conjunction with evidence showing that eye movements can be more sensitive than motion perception (Spering & Carrasco, 2015; Tavassoli & Ringach, 2010), these recent studies emphasize the value of eye movements as sensitive indicators of the processing of visual motion features. 
Notwithstanding the known sensitivity of eye movements and the finding that the eyes naturally move to the FOE across species (Knöll et al., 2018; Spering & Chow, 2018), we do not yet know how often or consistently observers lock their gaze onto the FOE or how accurate these intuitive eye movements are. A better characterization of eye movements in such a naturalistic context could lead to a better understanding of how eye movements might serve heading and locomotion. Moreover, intuitive eye movements are potentially a powerful tool for clinical testing and diagnosis, where patients often cannot be explicitly instructed. Understanding how these eye movements respond to different tasks and visual constraints could be a stepping stone toward the development of eye-movement-based tools to investigate motion sensitivity. Here, we characterize eye movement responses toward an optic flow stimulus with an unpredictably moving FOE. 
First, we investigated whether the FOE is tracked intuitively by a large sample of human observers given a simple instruction to free-view the stimulus. If so, this would confirm previous observations in a small sample (Knöll et al., 2018). We tested this by quantifying the occurrence and variability of intuitive FOE-tracking behavior through analysis of the overall alignment of the eye and FOE position changes, smooth tracking (optokinetic reflex or smooth pursuit), and fast tracking (saccades). 
Second, we investigated how eye movements change when observers receive explicit instruction to track the FOE versus when they are uninstructed. Whereas psychophysical testing in the laboratory usually involves explicit task instructions combined with feedback to promote task compliance, natural stimulus encounters do not involve explicit instruction. Moreover, the ability to understand and follow instructions depends on an observer's cognitive state. Here, we quantified the potential improvement of FOE tracking measures by explicit instruction. 
Third, we investigated how FOE tracking changed when we manipulated motion signal strength by altering stimulus features such as contrast, coherence, and speed. These manipulations mimic daily environments in which we regularly experience visual motion across a range of signal strengths, such as when driving through fog or moving slowly in heavy traffic. Previous studies have demonstrated how these features affect visual motion processing in trained observers under explicit instruction. For example, observers’ accuracy in discriminating between optic flow directions increased with increasing stimulus coherence (∼25% coherence required for a brief stimulus of 100 ms) (Burr & Santoro, 2001). Increasing luminance contrast did not improve discrimination performance beyond a certain level—for example, 3% (Allen, Hutchinson, Ledgeway, & Gayle, 2010) or 15% (Edwards, Badcock, & Nishida, 1996), suggesting early saturation of contrast in motion processing. When instructed to track the FOE with their eyes, observers showed improved spatial and temporal alignment when speed increased from 1 m/s to 4 m/s and from 4 m/s to 16 m/s (Cornelissen & van den Dobbelsteen, 1999). Analyzing the impact of motion signal strength on intuitive FOE tracking addresses whether intuitive eye movements reveal similar characteristics of motion processing previously established via instructed tasks. 
Methods
In two experiments, we characterized intuitive tracking performance. In Experiment 1, we asked observers to view a high-contrast, high-coherence optic flow stimulus under free-viewing. In Experiment 2, we manipulated stimulus signal strength (low vs. high coherence, contrast, and speed) and instruction (free viewing vs. explicit instruction to track the FOE). 
Observers
We tested 43 adults (mean age, 25.7 ± 5.0 years; 22 female); of these, 19 participated in Experiment 1 and 24 in Experiment 2. All observers provided data in the high-contrast, high-coherence condition, and each subset of observers provided additional data in our stimulus and instruction manipulations (n ≥ 19 for each comparison). These sample sizes are considerably larger than sample sizes used in similar studies in the literature—for example, n = 1 (Knöll et al., 2018) and n = 4 (Niemann et al., 1999). Due to problems with the eye-tracking setup, three observers did not complete all conditions after instruction, resulting in different n values when comparing the effect of motion signal strength (n = 22 for motion coherence; n = 21 for dot contrast; n = 22 for translational speed). All observers had normal or corrected-to-normal visual acuity (20/20), tested using a Snellen visual acuity chart. Observers received CAD 12 per hour as remuneration for their participation. The experiment protocol adhered to the tenets of the Declaration of Helsinki and was approved by the University of British Columbia Behavioral Research Ethics Board. Before participation, observers provided written informed consent. 
Visual display and apparatus
In both experiments, observers were seated 55 cm away from a 39 cm × 29 cm cathode-ray tube monitor (G255f, resolution 1280 × 1024 pixels, refresh rate 85 Hz; ViewSonic, Brea, CA) covering 39.0° × 29.5° of the visual field. Each observer's head was stabilized using a combined chin and forehead rest. Visual stimuli were generated using a PC with a GeForce GTX 970 graphics card (NVIDIA, Santa Clara, CA) and MATLAB R2018b (MathWorks, Natick, MA), Psychtoolbox 3.0.12 (Brainard, 1997; Kleiner, Brainard, & Pelli, 2007; Pelli, 1997), and PLDAPS toolbox version 4 (Eastman & Huk, 2012). 
Stimuli
Our stimulus was similar to that used by Knöll and colleagues (2018) and consisted of a cloud of 295 dots at a density of 0.4 dots/deg² covering 31.2° × 23.6° of the visual field (Figure 1). Each dot was 0.13° × 0.13° in size and had either an unlimited lifetime (Experiment 1) or a short lifetime of 4 frames (47 ms; Experiment 2). Dots were initially distributed uniformly within the desired virtual depth range (1–3 m) and were redrawn within this range when they reached the end of their lifetime or exceeded the depth limits (< 0.5 m or > 3.5 m). The background had a luminance of 50 cd/m². In Experiment 1, half of the dots were 33% brighter and half of the dots were 33% darker than the background (Michelson contrast). In Experiment 2, dot Michelson contrast was set to one of five contrast levels (3.6%, 8.1%, 18.1%, 40.3%, or 90%) when contrast was manipulated and to 90% contrast when either motion coherence or speed was manipulated. 
Figure 1.

An example stimulus (dots) superimposed with arrows to indicate the direction (arrow head) and velocity (arrow length) of local dot movement radiating from the FOE (white/black open circle).
Each dot was categorized as a signal dot or a noise dot to achieve a desired global motion coherence level. In Experiment 1, the motion coherence level was 100%; in Experiment 2, motion coherence was set to one of five coherence levels (6.25%, 12.5%, 25%, 50%, or 100%) when coherence was manipulated and to 100% coherence when contrast or speed was manipulated. Noise dots moved randomly, whereas the signal dot location was updated based on its three-dimensional location, the FOE location, and the desired translational speed as elaborated below. To calculate dot velocity on the screen (\(\dot{x}^s\)), we multiplied the distance between the dot and the FOE (\(x^s_{foe} - x^s\)) by a scaling factor, which was the ratio of the desired translational speed (\(\dot{z}\)) to the dot location in depth (\(z\)), as shown in the following formula:

\begin{equation*} \dot{x}^s = \left( x^s_{foe} - x^s \right) \times \left( \dot{z} / z \right) \end{equation*}
 
This formula means that local dot velocity is higher when the translational speed is higher, when the dot is located closer to the observer in virtual depth, and when the distance between the dot and the FOE is larger. In Experiment 1, translational speed was constant at 1.4 m/s; in Experiment 2, translational speed was set to one of five levels (0.5, 1, 1.4, 2.1, or 4.2 m/s) when speed was manipulated and to 1.4 m/s when coherence or contrast was manipulated. The in-depth motion direction of the dot cloud alternated randomly, resulting in expanding or contracting optic flow, with an average switch time of 3.6 seconds. 
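To make the geometry concrete, the following is a minimal sketch of this velocity update in Python/NumPy (the stimulus itself was generated in MATLAB with Psychtoolbox; the function and variable names here are ours, and only the screen-velocity formula above is taken from the text). In this formulation, flipping the sign of the translational speed reverses the flow direction (expansion vs. contraction).

```python
import numpy as np

def dot_screen_velocity(dot_pos, foe_pos, depth, trans_speed):
    """Screen velocity of signal dots, following the formula above.

    dot_pos     : (N, 2) dot positions on the screen (deg)
    foe_pos     : (2,)   FOE position on the screen (deg)
    depth       : (N,)   virtual dot depths z (m), e.g. drawn uniformly from 1-3 m
    trans_speed : scalar translational speed (m/s); its sign determines whether
                  the flow expands or contracts
    """
    # velocity is larger for faster self-motion, for dots that are virtually
    # closer to the observer, and for dots farther away from the FOE
    return (foe_pos - dot_pos) * (trans_speed / depth)[:, None]

# example: five dots, FOE at screen center, translational speed 1.4 m/s
rng = np.random.default_rng(0)
dots = rng.uniform([-15.6, -11.8], [15.6, 11.8], size=(5, 2))  # dot positions (deg)
z = rng.uniform(1.0, 3.0, size=5)                              # virtual depths (m)
vel = dot_screen_velocity(dots, np.zeros(2), z, trans_speed=1.4)
```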
The FOE location shifted continuously in the horizontal and vertical dimensions in a random-walk fashion. The shifts were small, yielding a within-trial location variability of standard deviation (SD) = 2.7° in Experiment 1, and larger in Experiment 2 (SD = 4.5°) to increase the demand for active tracking. The statistical dynamics of the FOE movement were kept constant across the manipulations of motion signal strength listed above; that is, even when dot speed was manipulated, the speed of the FOE itself did not change across these trials. 
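The paper reports only the resulting positional variability of the FOE path, not the underlying step statistics, so the sketch below shows one hypothetical way to generate a two-dimensional random-walk FOE trajectory and rescale it to the reported within-trial SD; the per-sample step size (step_sd) and the rescaling step are our assumptions, used only for illustration.

```python
import numpy as np

def random_walk_foe(n_samples, target_sd, step_sd=0.05, seed=None):
    """Generate a 2D random-walk FOE trajectory (deg), rescaled so that the
    within-trial positional SD matches target_sd (e.g., 2.7 or 4.5 deg)."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, step_sd, size=(n_samples, 2))   # hypothetical step size
    path = np.cumsum(steps, axis=0)
    path -= path.mean(axis=0)                               # center the walk
    path *= target_sd / path.std(axis=0, ddof=0)            # match the reported SD per axis
    return path

# ~30 s of trajectory, assuming one FOE update per 85-Hz video frame
foe = random_walk_foe(n_samples=30 * 85, target_sd=2.7, seed=1)
```

Rescaling after the fact matches the reported variability but alters the raw step statistics; it is meant only to convey the kind of trajectory used, not the authors' exact generator.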
Procedure and design
In Experiment 1, observers completed one free-viewing block of 3 to 3.5 minutes of accumulated viewing time with alternating optic flow direction. In Experiment 2, observers always first completed three blocks of free viewing, one for each motion signal strength manipulation (contrast, coherence, and speed, in randomized order), followed by three blocks (one each for contrast, coherence, and speed) in which they were asked to track the FOE with their eyes. Each block consisted of five 30-second trials. Different random seeds were used across blocks and observers; within each block, the same seed was used so that all trials shared the same FOE position trajectory. 
In both experiments, stimulus presentation was triggered when the observer looked at the screen and paused if they looked near the edge or outside the screen or blinked for more than 250 ms. Observers did not receive feedback regarding tracking performance. To ensure that all observers had the same level of exposure to the optic flow stimulus before the experiment started, they first viewed an exposure trial (30 seconds) with an obvious optic flow stimulus (high motion coherence, high dot contrast, and medium speed) with free-viewing instruction. 
Eye movement preprocessing and analysis
Eye position data were recorded at 1000 Hz using a video-based eye tracker (EyeLink 1000; SR Research, Ltd., Kanata, ON, Canada) and processed offline using MATLAB. Eye position trajectory (downsampled to 850 Hz) was compared with the FOE position trajectory (upsampled to 850 Hz). Eye position data were filtered using a Butterworth filter (low-pass, second-order) with a cutoff frequency of 30 Hz; velocity data were derived by digital differentiation of filtered position data. Blinks and eye position data 50 ms before and after blinks were removed from all analyses (5.2% of position data across observers). 
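As an illustration of this preprocessing pipeline, here is a rough equivalent in Python/SciPy (the authors used MATLAB; the function name, the assumption that the trace has already been resampled, and the exact blink-handling details are ours).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eye_trace(eye_pos, fs=850.0, cutoff=30.0, blink_mask=None, pad_ms=50):
    """Low-pass filter eye position and differentiate to obtain velocity.

    eye_pos    : (N, 2) eye position in degrees, already resampled to fs (Hz)
    blink_mask : (N,) boolean array marking detected blink samples (optional)
    """
    # second-order low-pass Butterworth at 30 Hz, applied zero-phase
    b, a = butter(N=2, Wn=cutoff / (fs / 2), btype='low')
    pos = filtfilt(b, a, eye_pos, axis=0)

    # velocity by digital differentiation of the filtered position (deg/s)
    vel = np.gradient(pos, 1.0 / fs, axis=0)

    # remove blinks plus a 50-ms margin before and after each blink
    if blink_mask is not None:
        pad = int(round(pad_ms / 1000 * fs))
        bad = np.convolve(blink_mask.astype(float), np.ones(2 * pad + 1), 'same') > 0
        pos[bad] = np.nan
        vel[bad] = np.nan
    return pos, vel
```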
Table 1 summarizes the measures used to describe FOE-tracking behavior. To assess the overall alignment between the FOE and eye position changes across time, we conducted cross-correlation analyses between each observer's eye position and FOE position across all samples obtained in a trial, where correlation coefficients were generated after introducing a variable time lag between the eye and FOE position. Based on results presented in Knöll et al. (2018), we limited the correlation analysis by the following constraints: The eye had to lag behind the FOE (not be ahead of it), and the maximum lag was 10 seconds. For each observer, we then used the highest yielded correlation coefficient as an indicator of overall alignment; higher correlation coefficient equals better alignment. To quantify temporal alignment, we used the time lag that yielded the highest correlation coefficient; the shorter the time lag, the faster the eyes caught up with FOE position changes. To quantify spatial alignment at zero lag, we used the average Euclidean distance between the eye and FOE position across samples; the smaller the position error, the more closely the eyes hovered around the FOE. 
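A simplified sketch of the lag-constrained cross-correlation and the zero-lag position error is shown below for a single (horizontal) component in Python. The constraints that the eye lags the FOE and that the maximum lag is 10 seconds follow the text; everything else, including the brute-force search over lags, is our own illustration.

```python
import numpy as np

def best_lagged_correlation(eye_x, foe_x, fs=850.0, max_lag_s=10.0):
    """Shift the eye trace back in time (the eye lags the FOE) and find the lag
    that maximizes the Pearson correlation with the FOE trace.
    Assumes blink samples have already been removed or interpolated."""
    max_lag = int(max_lag_s * fs)
    best_r, best_lag = -np.inf, 0
    for lag in range(0, max_lag + 1):
        if lag == 0:
            e, f = eye_x, foe_x
        else:
            e, f = eye_x[lag:], foe_x[:-lag]   # eye sample i matches FOE sample i - lag
        r = np.corrcoef(e, f)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag / fs               # coefficient and lag in seconds

def trace_position_error(eye_xy, foe_xy):
    """Mean Euclidean distance between eye and FOE positions at zero lag (deg)."""
    return np.nanmean(np.linalg.norm(eye_xy - foe_xy, axis=1))
```

In practice the same computation can be done with an FFT-based cross-correlation or a coarser lag grid; the explicit loop is used here only for readability.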
Table 1.

A summary of analysis measures to describe the temporal and spatial accuracy of eye movements relative to the FOE.
The random FOE shifts triggered a combination of different types of eye movements: saccades and smooth tracking. Saccadic eye movements were detected when eye velocity (the digitally differentiated eye position) exceeded a fixed velocity criterion (30°/s) for five consecutive samples. Acceleration minima and maxima determined saccade onsets and offsets. In addition to saccade rate, amplitude, and direction, we quantified the relationship between the saccade and the FOE by position error as a measure of spatial accuracy. We defined saccade position error as the Euclidean distance between eye position at saccade offset and FOE position at the time of saccade offset; lower position error equals higher accuracy. The quality of smooth-tracking eye movements during intersaccade intervals was characterized by analyzing their peak and median eye velocity, as well as their dot velocity gain (ratio of eye velocity to local dot velocity near the fovea, assuming a virtual dot depth of 2 m); higher velocity gain equals better quality. Short intersaccade intervals (<50 ms) or extremely long intervals (3 SD or more above the mean intersaccade interval duration for each observer) were removed from the analysis (2.3% of the intersaccade intervals across observers). 
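The velocity-criterion part of this saccade detection can be sketched as follows in Python (the refinement of onsets and offsets via acceleration minima and maxima is omitted here, and all names are ours).

```python
import numpy as np

def detect_saccades(speed, vel_crit=30.0, min_samples=5):
    """Return (onset, offset) sample indices of putative saccades, defined as runs
    where eye speed (deg/s) exceeds vel_crit for at least min_samples samples.
    (The paper additionally refines onsets/offsets using acceleration extrema.)"""
    above = speed > vel_crit
    edges = np.diff(above.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if above[0]:
        onsets = np.r_[0, onsets]
    if above[-1]:
        offsets = np.r_[offsets, len(speed)]
    return [(on, off) for on, off in zip(onsets, offsets) if off - on >= min_samples]

def saccade_position_error(eye_xy, foe_xy, saccades):
    """Euclidean distance between eye and FOE positions at each saccade offset (deg)."""
    idx = [off - 1 for _, off in saccades]
    return np.linalg.norm(eye_xy[idx] - foe_xy[idx], axis=1)
```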
When observers did not track, random eye movements could still land on the FOE by chance. To differentiate genuine tracking from alignment that occurred by chance (tracking-at-chance), we computed baseline alignment and eye movement measures that indicated the expected values for tracking-at-chance. To do so, we compared observers’ eye positions with secondary FOE trajectories that shared the same statistical dynamics as the displayed FOE but were never used to generate the flow pattern; the resulting measures indicate performance when tracking is at chance. In total, 474 such comparisons of 30 seconds each yielded a distribution of baseline measures. 
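The logic of this baseline can be sketched as below, reusing the hypothetical random_walk_foe and best_lagged_correlation helpers from the earlier sketches; the number of surrogate trajectories per observer is arbitrary here, whereas the paper reports 474 comparisons in total.

```python
import numpy as np

def chance_baseline(eye_x, n_surrogates=10, fs=850.0, target_sd=4.5, seed=0):
    """Distribution of 'tracking-at-chance' correlations: compare the observed eye
    trace with surrogate FOE trajectories that share the random-walk statistics
    of the displayed FOE but were never shown."""
    baseline_r = []
    for k in range(n_surrogates):
        # horizontal component of a surrogate FOE trajectory (never displayed)
        surrogate = random_walk_foe(len(eye_x), target_sd, seed=seed + k)[:, 0]
        r, _ = best_lagged_correlation(eye_x, surrogate, fs=fs)
        baseline_r.append(r)
    return np.array(baseline_r)
```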
Statistical analysis
For each analysis, we first tested whether the relationship between variables differed between Experiments 1 and 2; if no differences were observed, data were collapsed across experiments. We also assessed differences between responses to the horizontal and vertical stimulus dimensions and limited reporting to the horizontal dimension when no differences were observed. To quantify the occurrence and variance of intuitive FOE tracking, we report descriptive statistics, such as median (Mdn) and interquartile range (IQR), of alignment measures (cross-correlation coefficients, time lag, and trace position error), as well as eye movement measures (saccade position accuracy and smooth-tracking quality) across observers in Experiments 1 and 2. To compare intuitive FOE tracking to chance, the Mdn, IQR, and SD of the baseline measures of tracking-at-chance were reported for reference. Furthermore, observers were categorized as intuitive trackers if their cross-correlation coefficients under free viewing were at least 2 SD above the group Mdn. To investigate whether tracking depended on motion signal strength (high vs. low dot contrast, motion coherence, and global translational speed) and instruction (free viewing vs. tracking), we compared saccades and smooth tracking between conditions using the Friedman test and the Wilcoxon signed-ranks test, respectively. When a factor with more than two levels was significant, post hoc Wilcoxon signed-ranks tests adjusted for multiple comparisons by false-discovery rate were performed. Additional analyses that might be of interest to readers, such as the effect of time on task and optic flow direction (expansion vs. contraction) in Experiment 1, as well as the interaction between signal strength and instruction manipulations in Experiment 2, are provided in the Supplementary Materials. All statistical analyses were conducted in R (R Core Team, 2017). Non-parametric statistical tests were used because data were not normally distributed based on visual inspection. All result figures were produced using the R package ggplot2 (Wickham, 2009). 
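The statistical analyses were run in R; for readers working in Python, the same test structure might look roughly like this (scipy/statsmodels; the placeholder data and the choice of adjacent-level post hoc comparisons are ours).

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

# gain: (n_observers, n_levels) array, one column per signal-strength level
rng = np.random.default_rng(0)
gain = rng.uniform(0.2, 1.0, size=(22, 5))        # placeholder data

# omnibus test across the five signal-strength levels (cf. Friedman test)
chi2, p = friedmanchisquare(*[gain[:, i] for i in range(gain.shape[1])])

if p < 0.05:
    # post hoc Wilcoxon signed-rank tests on adjacent levels,
    # corrected for multiple comparisons by false-discovery rate
    p_raw = [wilcoxon(gain[:, i], gain[:, i + 1]).pvalue
             for i in range(gain.shape[1] - 1)]
    reject, p_fdr, _, _ = multipletests(p_raw, method='fdr_bh')

# two-condition comparison (e.g., free viewing vs. explicit instruction)
stat, p_instr = wilcoxon(gain[:, 0], gain[:, 1])
```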
Results
Human observers intuitively track the FOE with their eyes
A comparison of eye position to FOE position shows good temporal and spatial correspondence between the eye and the FOE when observers viewed an optic flow stimulus of high motion-signal strength. Figure 2 shows eye position trajectories from two individual observers in one trial, and Figure 3 shows the tracking performance of all 43 observers. 
Figure 2.

Horizontal and vertical position trajectories of the FOE (gray line) and the eyes (orange line) of two representative observers (A and B; C and D), showing a good temporal and spatial correspondence between eyes and FOE. Panels on the right show comparisons of the spatial distribution of eye positions and FOE in the same trials depicted on the left. Each data point represents the average in a 100-ms interval, yielding a total of 300 data points plotted for each observer across a period of 30 seconds. Probability density plots of horizontal and vertical positions were plotted, respectively, above and to the right of panels B and D.
Figure 3.

A quantification of FOE-tracking behavior under free-viewing (n = 43) and explicit instruction (n = 24). Data from Experiment 1 and Experiment 2 are plotted as open and solid dots, respectively. Positions along the horizontal axis were jittered to reduce overlapping in data points. The horizontal black line shows the median across observers and experiments. The gray dashed line indicates the median baseline measures of tracking-at-chance (cross-correlation coefficient, 0.34; time lag, 3.1 seconds; trace position error, 8.0°). Asterisks indicate p values in pairwise comparisons based on the subset of observers completing both conditions in Experiment 2 (**p < 0.01, ***p < 0.001).
In terms of temporal alignment between eye and FOE, the eyes tracked the FOE with a median cross-correlation coefficient of 0.84 (IQR = 0.15) (Figure 3A) at a median time lag of 544 ms (IQR = 672 ms) (Figure 3B) across observers. This temporal alignment is reflected in the example observers’ eye position trajectories (Figures 2A and C), indicated by small systematic rightward shifts (s02, 746 ms; s28, 366 ms) along the time axis for the eye position trace. The shape of the position traces (how the position changed as a function of time) was highly similar across both eye and FOE positions, yielding a high (closer to one) cross-correlation coefficient (s02, 0.84; s28, 0.93). Both observers had a cross-correlation coefficient of at least 2 SD above the simulated baseline cross-correlation coefficient (Mdn = 0.34, SD = 0.18, IQR = 0.26). If we consider cross-correlation coefficients ≥ 0.7 (2 SD above median baseline measure) as indicators of good tracking, 36 out of 43 observers (84% of our sample) showed intuitive FOE tracking when given free-viewing instruction. 
In terms of spatial alignment, the median trace position error across all observers was 4.4° (IQR = 2.1°) (Figure 3C). Figures 2B and 2D reveal an overlap between two-dimensional eye position and FOE position in the two example observers, indicated by density plots showing the frequency distribution of horizontal and vertical eye and FOE positions, respectively. The spread of the horizontal eye position distributions matched the respective horizontal FOE position distributions well, with >60% overlap for the two selected observers (s02, 65%; s28, 75%), even though a simple permutation test revealed statistical differences between the distribution shapes for both observers (s02, p < 0.001; s28, p = 0.048). 
To investigate how observers achieved accurate FOE tracking, we analyzed saccades in more detail. Across experiments, observers made a median of 1.5 saccades/s (IQR = 0.63 saccades/s) with an amplitude of Mdn = 2.3° (IQR = 0.9°). Across observers, the majority (71%) of these saccades (equivalent to 0.9 saccades/s) landed within 5° of the FOE with a median position error of 3.4° (IQR = 2.5°). Congruently, the majority (64%) of all saccades were distance-minimizing saccades. These saccades caused a median position error reduction of 1.5° (IQR = 0.6°). For smooth-tracking eye movements, observers’ eyes moved slowly at an average velocity of 2.5°/s (IQR = 0.9°/s) and a dot velocity gain of 0.72 (IQR = 0.50). Both saccade accuracy and smooth-tracking quality were higher than baseline performance of tracking-at-chance (baseline saccade position error: Mdn = 7.5°, SD = 5.0°, IQR = 6.6°; baseline smooth-tracking velocity gain: Mdn = 0.43, SD = 0.92, IQR = 0.51). 
These results show that most observers intuitively tracked the FOE with high accuracy, suggested by the good temporal and spatial correspondence between the FOE and the eyes. Observers did so by directing their saccades at the FOE, indicated by a large proportion of saccades landing close to the FOE. This tracking performance was extracted from only 30 seconds of viewing time (see Supplementary Materials and Supplementary Figure S1). Tracking performance was comparable across optic flow directions (see Supplementary Materials and Supplementary Figure S2). 
Explicit instructions improve FOE–eye alignment and saccade accuracy
Explicit tracking instruction improved the overall alignment between the eyes and the FOE across all observers who participated in both instruction conditions (n = 24) (Figure 3, solid dots). The overall FOE–eye alignment improved by 8.1% with explicit instruction, indicated by an increase in cross-correlation coefficients (instruction: Mdn = 0.93, IQR = 0.07; free-viewing: Mdn = 0.86, IQR = 0.13; Z = 22, p < 0.001, r = 0.75) (Figure 3A). We also observed a 19% decrease in trace position error in trials with explicit instruction (Mdn = 3.9°, IQR = 1.8°) versus with free-viewing instruction (Mdn = 4.8°, IQR = 1.4°; Z = 247, p = 0.004, r = 0.57) (Figure 3C), and an 8.3% reduction in time lag, from 444 ms (IQR = 274 ms) with free-viewing instruction to 407 ms (IQR = 184 ms) with explicit instruction, which was not significant (Z = 195, p = 0.21, r = 0.26) (Figure 3B). These findings suggest that explicit instruction improved the overall mapping between the eyes and the FOE in time and in space, but not necessarily how quickly the eyes caught up with the FOE. 
Explicit instruction improved saccade accuracy. Under explicit instruction, 80% (IQR = 22%) of all saccades were FOE-targeting saccades with a median position error of 3.3° (IQR = 1.4°). During free viewing, observers made significantly fewer targeting saccades (Mdn = 58%, IQR = 30%; Z = 45, p = 0.002, r = 0.61), and saccade position error was significantly higher (Mdn = 4.0°, IQR = 1.9°, Z = 244, p = 0.006, r = 0.55), as shown in Figure 4A. Saccade rate was higher under explicit instruction (Mdn = 1.7 saccades/s, IQR = 0.5 saccades/s) than free viewing (Mdn = 1.4 saccades/s, IQR = 0.6 saccades/s; Z = 61.5, p = 0.04, r = 0.43). The percentage of distance-minimizing saccades was similar under free-viewing (Mdn = 69%, IQR = 17%) versus explicit instruction (Mdn = 75%, IQR = 10%; Z = 93, p = 0.11, r = 0.33). These saccades caused comparable position error reduction during free-viewing (Mdn = 1.6°, IQR = 0.5°) versus explicit instruction (Mdn = 1.6°, IQR = 0.8°; Z = 111, p = 0.28, r = 0.23). 
Figure 4.

The effect of explicit instruction on saccade position error (A) and smooth-tracking velocity gain (B) in Experiment 2. Individual data points are plotted in gray shapes with the positions jittered along the horizontal axis to reduce overlap. The horizontal black line shows the median across observers. The gray dashed line indicates the median baseline measures of tracking-at-chance (saccade position error, 7.5°; smooth-tracking velocity gain, 0.43). Asterisks indicate p-values in pairwise post hoc comparisons (**p < 0.01).
By contrast, smooth-tracking performance was similar across free-viewing and explicit instruction conditions as shown in Figure 4B. Under explicit instruction, observers’ eyes moved at a median velocity of 2.4°/s (IQR = 0.7°/s) and at a dot velocity gain of 0.50 (IQR = 0.30). During free viewing, the eye velocity was higher (Mdn = 2.7°/s, IQR = 0.7°/s; Z = 226, p = 0.03, r = 0.44) but not the gain (Mdn = 0.51, IQR = 0.22; Z = 138, p = 0.75, r = 0.07). 
Taken together, these results show that explicit instruction improved the correspondence between the eyes and the FOE via improvements in saccade accuracy, but not improvements in smooth tracking. The Supplementary Materials and Supplementary Figure S3 further show that the benefit of explicit instruction on saccade accuracy is more pronounced at most signal strength levels. 
Intuitive eye movements scale with stimulus signal strength
We next investigated effects of motion signal strength on the accuracy and quality of intuitive eye movements under free-viewing instruction. As shown in Figures 5A to C, saccade position error was generally reduced when motion signal strength increased, confirmed by a significant main effect of motion signal strength in all three stimulus manipulations: for coherence, χ2(4) = 19.8, p < 0.001, WKendall = 0.23; for contrast, χ2(4) = 11.6, p = 0.031, WKendall = 0.15; and for speed, χ2(4) = 34.4, p < 0.001, WKendall = 0.41. The percentage of distance-minimizing saccades also increased when coherence or speed increased: for coherence, χ2(4) = 11, p = 0.03, WKendall = 0.13; for speed, χ2(4) = 14.9, p = 0.005, WKendall = 0.18. The same did not hold true for contrast, χ2(4) = 7.8, p = 0.10, WKendall = 0.10. Similar to saccade position error, smooth-tracking velocity gain (Figures 5D to F) was modulated by motion signal strength in all stimulus manipulations: for coherence, χ2(4) = 45.5, p < 0.001, WKendall = 0.52; for contrast, χ2(4) = 32.6, p < 0.001, WKendall = 0.41; for speed, χ2(4) = 76.3, p < 0.001, WKendall = 0.91. 
Figure 5.

The influence of motion signal strength (horizontal axis) on saccade position error (top row) and smooth-tracking velocity gain (bottom row) in Experiment 2 under free-viewing, where motion coherence (A, D), dot contrast (B, E), and global translational speed (C, F) were manipulated. Individual data points are plotted in gray shapes with the positions jittered along the horizontal axis to reduce overlap. The horizontal black line shows the median across observers. The gray dashed line indicates the median baseline measures of tracking-at-chance (saccade position error, 7.5°; smooth-tracking velocity gain, 0.43). Asterisks indicate p values in pairwise post hoc comparisons (*p < 0.05, **p < 0.01, ***p < 0.001).
A closer examination showed that saccade position error and velocity gain did not change parametrically at each step of motion signal strength. For motion coherence (Figure 5A), both measures showed the largest change at the highest level of motion coherence, reflected in a significant pairwise comparison when motion coherence increased from 50% to 100%. For dot contrast (Figure 5B), both measures showed the largest change at the lowest level of dot contrast, when dot contrast increased from 3.6% to 8.1%. For translational speed (Figure 5C), saccade accuracy and smooth-tracking quality increased significantly at each increment of speed. 
These results show that saccade accuracy and smooth-tracking quality under free-viewing were sensitive to motion signal strength. In general, the higher the motion signal strength, the lower the saccade position error, as well as the higher the velocity gain. Our results also highlight differences in stimulus manipulations. Among the chosen stimulus values, changes in eye movement measures were most apparent when motion coherence was high and when dot contrast was low, whereas translational speed induced changes in eye movement measures across all speed levels. In the Supplementary Materials and Supplementary Figure S3, we show that the effects of motion signal strength were similar after explicit instruction, with changes in eye movement measures being more pronounced compared to free-viewing. These results suggest that eye movements are sensitive indicators of motion signal characteristics and might reflect more general motion processing mechanisms. 
Discussion
In this study, we characterized eye movement responses triggered by optic flow in a larger group of human observers and report the following key findings. First, over 80% of observers intuitively tracked a dynamically moving FOE. This tracking was achieved by a combination of saccades directed to the FOE and smooth-tracking eye movements in response to local dot motion near the fovea. Second, explicit instruction improved the overall correspondence between the eye and the FOE via improvements in saccade accuracy, but not the quality of smooth-tracking eye movements. Third, intuitive eye movement measures depended on motion signal strength. Taken together, these findings show that intuitive eye movements triggered by optic flow allow us to probe how the visual system processes dynamic visual scenes relevant to self-motion. In the following paragraphs, we discuss the consistency of this intuitive alignment between the eyes and the FOE with previous studies, how this alignment is achieved by using a combination of different types of eye movements, and its potential applications as sensitive indicators of performance in various dynamic tasks. 
The spatial and temporal alignment between the eyes and the FOE was consistent across studies
Self-motion, object motion, and eye movements interact to create a dynamic visual scene, including shifts of the FOE. In our study, observers used a combination of saccades and smooth tracking to align the fovea with the FOE with a median eye position error of 4.4°, congruent with previous research. For example, real-world eye-tracking studies have shown that human observers looked at the lead car or the center of the lane while driving (Mourant, Rockwell, & Rackoff, 1969; Rockwell, 1972), with 90% of the observed fixations occurring within 4° of the FOE. Using a saccade task, Hooge, Beintema, and van den Berg (1999) showed that observers’ saccade endpoints scattered within a region 4° in diameter around the FOE. When viewing an optic flow stimulus with a FOE shifting sinusoidally in the horizontal dimension, observers locked their gaze on average within 4.6° of the FOE about 70% of the time (Shirai & Imura, 2016). The similarity of spatial performance across studies suggests that our normative measures generalize across stimuli (real-world visual scene vs. laboratory-generated optic flow), tasks (instructed saccade task vs. free-viewing task), and FOE movement types (predictable one-dimensional vs. unpredictable two-dimensional). Intuitive performance is highly relevant in understanding real-world behavior across settings. 
In the temporal domain, our analysis showed that under free-viewing instruction the eyes lagged behind the FOE by a median of 544 ms (considering the entire sample of naïve observers) or 444 ms (considering only the sample who completed the task under free-viewing and explicit tracking instruction), both of which are longer than similar measures reported in previous research. However, when considering a subset of data based on explicit instruction, the median time lag dropped to 407 ms, comparable to previous studies. For example, Knöll and colleagues (2018) reported in one human observer that the eyes tracked the FOE with a lag of 300 ms. Cornelissen and van den Dobbelsteen (1999) showed that a small sample of instructed observers tracked the FOE with a time lag of 300 to 450 ms. When an optic flow stimulus was briefly shown for ∼300 ms, observers reliably placed the cursor near the FOE (te Pas, Kappers, & Koenderink, 1998) and discriminated among stimuli of different heading directions (Crowell et al., 1990). Accounting for saccade dead time (when saccades cannot be altered), Hooge et al. (1999) estimated that the processing time of heading direction is 430 ms based on saccade latency and saccade landing error. Overall, these studies suggest that heading direction can be extracted in under 450 ms, consistent with what we found in trials in which observers were instructed to track. In sum, spatial and temporal alignment between the eyes and the FOE shows remarkable similarities with previous reports, suggesting that intuitive tracking can capture performance across settings. 
Saccades and pursuit jointly supported intuitive tracking
To achieve this consistent and intuitive tracking of the FOE, observers in our study exhibited a combination of saccades and smooth-tracking eye movements similar to previous reports (Angelaki & Hess, 2005; Knöll et al., 2018; Lappe et al., 1998; Lappi, Pekkanen, Rinkkala, Tuhkanen, Tuononen, & Virtanen, 2020). This combined behavior is aligned with the notion that these eye movements are complementary for visual tracking of unpredictable targets (Orban de Xivry & Lefèvre, 2007). Whereas the overall saccade rate in our study was lower than that in other studies using free viewing of natural scenes (e.g., 2.9 saccades/s) (Otero-Millan, Troncoso, Macknik, Serrano-Pedraza, & Martinez-Conde, 2008), saccade endpoints were aligned with the FOE, yielding a percentage of FOE-targeting saccades similar to that of a previous study (Knöll et al., 2018). When observers were not making saccades, they exhibited smooth-tracking eye movements at low velocity. The gain of smooth tracking (0.72 considering the entire sample of naïve observers) was similar to the gain (approximately 0.6) achieved in other studies for passive viewing of optic flow (Niemann et al., 1999). Nevertheless, we observed considerable differences in gain values across experiments (0.93 with unlimited dot lifetime in Experiment 1; 0.51 with a short dot lifetime of 47 ms in Experiment 2). The discrepancy might be partly attributed to the short dot lifetime and the larger variability of FOE location adopted in Experiment 2. The near-unity gain in Experiment 1 suggests that observers might have adopted a strategy of tracking local dots in certain situations (e.g., unlimited dot lifetime, low variability of FOE location). This performance was similar to the gain reported by Niemann et al. (1999) after explicitly instructing observers to follow local dot motion. The results obtained for combined saccade and pursuit tracking suggest that our saccadic eye movement system is highly responsive to global features of a complex visual stimulus, such as the FOE. Whereas the pursuit system is optimized for tracking moving objects, it plays merely a supplementary role in FOE tracking. Extracting the FOE is more challenging than tracking a small, moving dot with smooth pursuit or determining the direction of a homogeneously moving field with optokinetic nystagmus. Furthermore, the random shifts of the FOE also mean that tracking cannot be perfect, because observers cannot predict when and where the next shift will occur. These differences might explain why, in our study, even under explicit instruction, the eyes still deviated from the FOE position by 3.3° and tracked at a gain of only ∼0.50. 
Intuitive tracking was sensitive to stimulus features
Our study further indicates that intuitive eye movements to the FOE are sensitive to all three manipulated stimulus features, albeit to different extents. These results resemble known characteristics of visual motion processing established in previous literature with instructed observers. A lower motion coherence means that a smaller proportion of dots moves in a direction consistent with the FOE. A lower translational speed results in lower local dot speeds, particularly in the periphery away from the FOE. A lower dot contrast yields a weaker local motion signal. All of these features simulate more challenging viewing conditions in daily life. In our study, FOE tracking was most accurate when motion signal strength was high for all features; the stimulus range that most effectively modulated intuitive tracking varied with the stimulus feature. Saccade accuracy and smooth-tracking quality increased incrementally at each translational speed level tested, which covered the typical locomotor speed range of human observers for walking (1.2 m/s) (Mohler, Thompson, Creem-Regehr, Pick, & Warren, 2007), running (2 m/s) (Thorstensson & Roberthson, 1987), and cycling (4.2 m/s) (Cornelissen & van den Dobbelsteen, 1999). These measures changed only when dot contrast increased from 3.6% to 8.1%, but not at higher dot contrast levels, resembling previous evidence of early contrast saturation in motion processing (Allen et al., 2010; Edwards et al., 1996). For motion coherence, eye movement measures changed between 50% and 100% coherence but not at lower coherence levels. The higher motion coherence required to improve performance might be attributed to the short dot lifetime used in Experiment 2. 
Overall, these results imply that intuitive tracking is sensitive to changes in motion signal strength, reflecting the known overlap between brain areas responsible for visual motion processing and eye movement control, including the middle temporal area (MT/V5+), the medial superior temporal (MST) area, and the ventral intraparietal (VIP) area. Areas that process sensory information related to visual motion include MT (e.g., Händel, Lutzenberger, Thier, & Haarmeier, 2007), and areas related to self-motion signals such as optic flow include MST (e.g., Duffy & Wurtz, 1995) and VIP (e.g., Bremmer, Duhamel, Hamed, & Graf, 2002). For example, MT is sensitive to stimulus features such as motion coherence (Händel et al., 2007), whereas neuronal responses in MST and VIP are tuned to heading direction. A majority of dorsal MST neurons prefer radial optic flow with a FOE within 45° of straight ahead (Duffy & Wurtz, 1995). This profile of MST neurons has been proposed to explain why behavioral and neural decoding of heading discrimination is superior around straight ahead (Gu et al., 2010). The same areas are also involved in the control of smooth-pursuit eye movements (Lisberger, 2015) and saccades to moving targets (Newsome, Wurtz, Dürsteler, & Mikami, 1985). In the absence of explicit instructions or other tasks, looking at the FOE might generate a radial flow of visual information that is in line with what we experience in everyday life, where the retinal FOE is centered on straight ahead (Matthis, Muller, Bonnen, & Hayhoe, 2020). 
Intuitive tracking has potential applications
Eye movements are often used as a window into perception for observers who cannot follow verbal instructions, such as preverbal infants (Dobson & Teller, 1978) and nonverbal animals (Douglas, Alam, Silver, McGill, Tschetter, & Prusky, 2005). Others have combined eye movements and explicit instruction to measure visual functions (e.g., Dakin & Turnbull, 2016; Mooney, Hill, Tuzun, Alam, Carmel, & Prusky, 2018). For example, Mooney et al. (2018) adaptively reduced the contrast of a visual Gabor stimulus until observers stopped tracking the target, producing a contrast sensitivity function in 5 minutes. Asking observers to track a visual target with a cursor can also reveal observers’ visual sensitivity (Bonnen, Burge, Yates, Pillow, & Cormack, 2015). However, these assessment methods still require giving observers explicit instructions about what to follow. In the current study, we quantified the benefit of explicit instruction on intuitive eye movements and showed that intuitive eye movements share characteristics with instructed eye movements, supporting the use of intuitive eye movements to study visual processing in a broader population. 
These intuitive eye movements would be useful for assessing visual functions in patients whose verbal understanding might be limited (e.g., patients with neurocognitive disorders). For example, previous research has manipulated dot density, number of dots, and size of the dot field to investigate spatial integration of visual motion (Burr, Morrone, & Vaina, 1998; Warren, Morris, & Kalish, 1988). Manipulating these stimulus features while observers engage in intuitive tracking might reveal deficits in spatial integration. The effect of visual field defects could be investigated by restricting the stimulus area to central versus peripheral vision, analogous to Cornelissen and van den Dobbelsteen (1999). Furthermore, aging and visual disorders such as glaucoma are associated with impairments in perceiving and discriminating the direction and speed of moving objects (Bennett, Sekuler, & Sekuler, 2007; Falkenberg & Bex, 2007; Shanidze & Verghese, 2019; Snowden & Kavanagh, 2006), especially under low contrast (Allen et al., 2010). Such impairments can reduce vision-related quality of life (Roh, Selivanova, Shin, Miller, & Jackson, 2018) and compromise driving safety (Wood, Black, Mallon, Kwan, & Owsley, 2018). Given that FOE-tracking performance can be measured in only 30 seconds of viewing time, a potential assessment tool could require only a short viewing time and therefore be practical in a clinical setting. 
Some questions remain to be addressed before intuitive tracking can be applied in this way. First, when do observers intuitively track the FOE? Our research did not assess the situational constraints of intuitive FOE tracking; for example, task demands might suppress FOE tracking. Churan, von Hopffgarten, and Bremmer (2018) showed that when viewing an optic flow stimulus to encode distance traveled, observers showed preferences for sampling either near or farther away from the FOE. Combining an optic flow stimulus with a steering task, Lakshminarasimhan, Avila, Neyhart, DeAngelis, Pitkow, and Angelaki (2020) showed that observers’ eyes tracked an invisible steering goal rather than the FOE. In real-world eye tracking during walking or driving, the eyes could track future landing positions of the feet (e.g., Hollands, Marple-Horvat, Henkes, & Rowan, 1995; Matthis, Yates, & Hayhoe, 2018) or the inner side of the curve before a bend (e.g., Land & Lee, 1994). These studies suggest that FOE tracking might not be reflexive; it is possible that intuitive FOE tracking occurs only in the absence of competing tasks. 
Second, where do the inter-individual differences in intuitive tracking originate, among those who track? To investigate this, researchers should establish whether inter-individual differences are reliable and whether they could be explained by factors such as perceptual sensitivity or attention. Whereas previous research has shown a relationship between instructed tracking and psychophysical judgment of the same visual stimulus (Bonnen et al., 2015; Mooney et al., 2018), the relation between intuitive tracking and perceptual performance remains to be tested. 
Conclusions
Many daily functions are critically dependent on our ability to perceive and quickly respond to events in complex visual scenes, such as heading direction changes induced by self-motion. Our work shows that human observers track FOE changes even when they are not explicitly instructed to do so, suggesting that this tracking behavior is intuitive. The ability to keep the eyes aligned with a shifting FOE might serve as an important gaze stabilization strategy during self-motion, facilitating control of the body during natural locomotion and compensating for the unstable flow experienced when the head moves rhythmically during a gait cycle (Matthis et al., 2020). Furthermore, we have shown that this intuitive tracking behavior shares many characteristics (e.g., overall alignment and sensitivity to motion-signal-strength manipulation) with instructed performance, supporting the use of intuitive eye movements as sensitive indicators of visual motion processing. 
Acknowledgments
The authors thank the two anonymous reviewers and members of the Oculomotor Lab for their constructive and detailed comments on an earlier version of this manuscript. 
Supported by a Michael Smith Foundation for Health Research Trainee Award and a Canadian Institutes of Health Research Fellowship to HMC, a Natural Sciences and Engineering Research Council of Canada Discovery Grant and Accelerator Supplement to MS, and a Peter Wall Institute for Advanced Studies Wall Solutions Grant awarded to MS. 
Commercial relationships: none. 
Corresponding author: Hiu Mei Chow. 
Email: doris.hm.chow@ubc.ca. 
Address: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada. 
Figure 1. An example stimulus (dots) superimposed with arrows indicating the direction (arrowhead) and speed (arrow length) of local dot motion radiating from the FOE (white/black open circle).
Figure 2. Horizontal and vertical position trajectories of the FOE (gray line) and the eyes (orange line) of two representative observers (A and B; C and D), showing close temporal and spatial correspondence between the eyes and the FOE. Panels on the right compare the spatial distributions of eye position and FOE position for the same trials shown on the left. Each data point is the average within a 100-ms interval, yielding 300 data points per observer over the 30-second trial. Probability density plots of horizontal and vertical position are shown above and to the right of panels B and D, respectively.
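For readers who want to reproduce this kind of summary, the following is a minimal sketch (not the authors' analysis code) of how gaze and FOE samples can be averaged into 100-ms bins. The sampling rate, array names, and function name are assumptions made for illustration.

```python
# Minimal sketch, assuming gaze samples at a fixed rate over one 30-s trial.
# At 100-ms bins this yields 300 averaged points per trial, as in Figure 2.
import numpy as np

def bin_positions(eye_x, eye_y, sample_rate=1000, bin_ms=100):
    """Average 1-D position traces into non-overlapping bins of bin_ms length."""
    eye_x = np.asarray(eye_x, dtype=float)
    eye_y = np.asarray(eye_y, dtype=float)
    samples_per_bin = int(sample_rate * bin_ms / 1000)
    n_bins = len(eye_x) // samples_per_bin
    x = eye_x[:n_bins * samples_per_bin].reshape(n_bins, samples_per_bin).mean(axis=1)
    y = eye_y[:n_bins * samples_per_bin].reshape(n_bins, samples_per_bin).mean(axis=1)
    return x, y
```

The same binning would be applied to the FOE coordinates before plotting eye against FOE position.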
Figure 3. Quantification of FOE-tracking behavior under free-viewing (n = 43) and explicit instruction (n = 24). Data from Experiment 1 and Experiment 2 are plotted as open and solid dots, respectively; positions along the horizontal axis were jittered to reduce overlap between data points. The horizontal black line shows the median across observers and experiments. The gray dashed line indicates the median tracking-at-chance baseline (cross-correlation coefficient, 0.34; time lag, 3.1 seconds; trace position error, 8.0°). Asterisks indicate p values for pairwise comparisons based on the subset of observers who completed both conditions in Experiment 2 (**p < 0.01, ***p < 0.001).
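To illustrate the temporal measures named in this caption, the sketch below computes a cross-correlation coefficient and time lag between an eye-position trace and the FOE trace. It is a sketch under assumptions (trace names, sampling rate, and lag window are invented for illustration), not the authors' pipeline.

```python
# Minimal sketch: peak correlation and corresponding lag between eye and FOE traces.
import numpy as np

def shifted_corr(eye, foe, k):
    """Pearson correlation with the eye trace delayed by k samples relative to the FOE.
    Positive k means the eye follows the FOE after a delay."""
    if k >= 0:
        a, b = eye[k:], foe[:len(foe) - k]
    else:
        a, b = eye[:len(eye) + k], foe[-k:]
    return np.corrcoef(a, b)[0, 1]

def xcorr_peak(eye, foe, sample_rate=1000, max_lag_s=5.0):
    """Return (peak cross-correlation coefficient, time lag in seconds)."""
    eye = np.asarray(eye, dtype=float)
    foe = np.asarray(foe, dtype=float)
    max_lag = int(max_lag_s * sample_rate)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([shifted_corr(eye, foe, k) for k in lags])
    best = int(np.argmax(r))
    return r[best], lags[best] / sample_rate
```

Applied separately to horizontal and vertical traces, this yields the kind of coefficient and lag summarized above; how the two dimensions are combined is a further analysis choice not shown here.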
Figure 4. The effect of explicit instruction on saccade position error (A) and smooth-tracking velocity gain (B) in Experiment 2. Individual data points are plotted as gray shapes, jittered along the horizontal axis to reduce overlap. The horizontal black line shows the median across observers. The gray dashed line indicates the median tracking-at-chance baseline (saccade position error, 7.5°; smooth-tracking velocity gain, 0.43). Asterisks indicate p values in pairwise post hoc comparisons (**p < 0.01).
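The two spatial measures named in this caption can be illustrated with a minimal sketch under assumed definitions (not necessarily the authors' exact ones): saccade position error as the Euclidean distance between a saccade endpoint and the FOE at saccade offset, and smooth-tracking velocity gain as eye speed divided by FOE speed over saccade-free samples.

```python
# Minimal sketch of two tracking measures; definitions are assumptions for illustration.
import numpy as np

def saccade_position_error(saccade_end_xy, foe_xy_at_offset):
    """Euclidean distance (deg) between a saccade landing position and the FOE."""
    diff = np.asarray(saccade_end_xy, float) - np.asarray(foe_xy_at_offset, float)
    return float(np.hypot(*diff))

def velocity_gain(eye_xy, foe_xy, smooth_mask, sample_rate=1000):
    """Mean eye speed / FOE speed over samples flagged as smooth tracking.

    eye_xy, foe_xy: (n_samples, 2) position arrays in degrees.
    smooth_mask: boolean array marking saccade-free samples.
    """
    eye_xy = np.asarray(eye_xy, float)
    foe_xy = np.asarray(foe_xy, float)
    smooth_mask = np.asarray(smooth_mask, bool)
    eye_speed = np.hypot(*np.gradient(eye_xy, axis=0).T) * sample_rate
    foe_speed = np.hypot(*np.gradient(foe_xy, axis=0).T) * sample_rate
    valid = smooth_mask & (foe_speed > 0)
    return float(np.mean(eye_speed[valid] / foe_speed[valid]))
```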
Figure 5. The influence of motion signal strength (horizontal axis) on saccade position error (top row) and smooth-tracking velocity gain (bottom row) in Experiment 2 under free-viewing, where motion coherence (A, D), dot contrast (B, E), and global translational speed (C, F) were manipulated. Individual data points are plotted as gray shapes, jittered along the horizontal axis to reduce overlap. The horizontal black line shows the median across observers. The gray dashed line indicates the median tracking-at-chance baseline (saccade position error, 7.5°; smooth-tracking velocity gain, 0.43). Asterisks indicate p values in pairwise post hoc comparisons (*p < 0.05, **p < 0.01, ***p < 0.001).
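The captions above refer to a tracking-at-chance baseline without describing how it was derived. The sketch below shows one generic permutation approach, re-pairing eye and FOE traces across trials and recomputing the same measure; it is offered purely as an illustration and should not be read as the authors' procedure.

```python
# Illustration only: a generic permutation baseline for any eye-vs-FOE measure.
# `measure` is any function of (eye_trace, foe_trace), e.g., a correlation or gain
# function like those sketched above; trace containers and names are assumptions.
import numpy as np

def chance_baseline(eye_traces, foe_traces, measure, n_permutations=1000, seed=0):
    """Median of `measure` computed on mismatched eye/FOE trace pairs."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(n_permutations):
        i, j = rng.choice(len(eye_traces), size=2, replace=False)  # different trials
        values.append(measure(eye_traces[i], foe_traces[j]))
    return float(np.median(values))
```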
Table 1. A summary of analysis measures to describe the temporal and spatial accuracy of eye movements relative to the FOE.