Safe driving demands continual visual awareness of the environment around the vehicle. Distraction can impact both the driver’s level of awareness and ability to maintain operational control. Nearly any task that distracts a driver has negative consequences for safe vehicle operation (Victor et al., 2015; K. Young & Regan, 2007). Although much of the work in driver distraction has explained the dangers of distraction in terms of drivers’ need to divide their attention (Strayer & Drews, 2007; Strayer et al., 2015), some recent work has suggested a more complex account (Seaman et al., 2017). In exploring the question of distraction, it is important to consider how the driver’s perception of the environment changes as a consequence of the distracting task, since many such tasks (e.g., changing the radio station, responding to a text message, or looking at directions on a smartphone) require the driver to move the point of gaze away from the forward roadway. These tasks inherently require drivers to use peripheral vision, which may put them at a disadvantage when it comes to noticing changes on the road ahead (Wolfe, Dobres, Rosenholtz, & Reimer, 2017).

Therefore, our goal in this work was to disentangle the perceptual and attentional factors in distracted driving. To do this, we needed to consider how distraction had been studied in this context. Previous work in this area can be classified into three general approaches. The first is to distract the driver with a driving-irrelevant task in the periphery (e.g., detecting a colored dot imposed on the scene) and to measure how this peripheral distractor impacts driving-relevant behavior (cf. Bian, Kang, & Andersen, 2010; Crundall, Underwood, & Chapman, 2002; Miura, 1986). Focusing on driving-relevant behavior, such as staying in the correct lane, is sensible, but in these experiments, the driver is free to view the road environment centrally and has no reason to allocate resources to a task that isn’t relevant to driving. A second approach is to place the driving-relevant targets in the periphery (e.g., by instructing participants to fixate the dashboard), without manipulating cognitive load (Lamble, Laakso, & Summala, 1999; Yoshitsugu, Ito, & Asoh, 2000). This addresses the perceptual question we are interested in, without considering the impact of distraction. A third approach, and the one used in the present study, is to set the driver a driving-relevant detection task in the periphery—for instance, detecting brake lights—and to manipulate their level of distraction with an irrelevant task at fixation. This allows the experimenter to separate the impact of cognitive load from the perceptual consequences of the driver relying on more eccentric vision (i.e., stimuli farther away from the current point of gaze). Perhaps the most relevant work in this area has been by Cooper and colleagues, who considered the operational consequences (e.g., the impact on vehicle control) of requiring drivers to acquire relevant information peripherally, but they did not use perceptual measures, such as drivers’ accuracy in detecting brake lights (Cooper, Medeiros-Ward, & Strayer, 2013). Therefore, our question remains unanswered: For tasks that require drivers to look away from the road, to what extent is increased cognitive load, as compared to increased eccentricity, responsible for drivers’ failures to notice driving-relevant changes in their environment?

To discuss the existing literature in this area in more detail, we first consider tasks that require subjects to detect driving-irrelevant targets. These tasks are often interpreted as supporting a reduction in drivers’ ability to monitor the road environment, even though the tasks are orthogonal to the task of driving. For example, Miura (1986) placed drivers in a simulator and asked them to detect transiently presented peripheral stimuli that were irrelevant to the driving task while they drove. Drivers were slower to report more eccentric targets, depending on the environment and the load it imposed, and could only maintain a given level of performance if the irrelevant targets were presented more centrally (Miura, 1986). Crundall et al. (2002) reported analogous results in a desktop experiment, using clips of road video and a task that required subjects to detect irrelevant eccentric stimuli superimposed on the video. These researchers also observed reduced detection performance for the irrelevant stimuli as a function of demand, although this was attenuated by driver experience. Echoing these results, Bian et al. (2010) performed a similar task using simulated road video and a superimposed irrelevant detection task, and found a similar reduction in peripheral detection performance under load. This approach, of relying on task-irrelevant targets in driving, has been widely adopted in driving research in the form of peripheral detection tasks, or PDTs (Martens & van Winsum, 2000), in which a single light is placed on the dashboard, and the driver is asked to report when it illuminates, as a measure of perception and attention. However, the decrements observed by Bian, Crundall, and Miura are not in complete agreement with research focused on detecting and responding to driving-relevant changes in the environment, as opposed to detecting stimuli that can be ignored without operational consequences, such as flashed lights in the periphery.

A smaller body of work has investigated the question of what drivers can perceive of the road scene when they move their eyes away from the forward road—that is, what driving-relevant changes they can pick up on with peripheral vision. Perhaps the best known work in this area is by Lamble et al. (1999), who asked drivers to maintain fixation at a range of different locations within a vehicle, while following a lead vehicle on-road and detecting when that vehicle braked. Drivers in this experiment who had to use peripheral vision to monitor the lead vehicle performed significantly worse than when they were able to look at the road ahead. Similar work, which also required subjects to detect other vehicles on the road, was done to determine the safest position for center console displays—that is, how low or high a built-in global positioning system (GPS) device should be mounted (Yoshitsugu et al., 2000). Yoshitsugu and colleagues found that lower mounting positions, which forced the driver to rely on more peripheral visual input to monitor the scene, reduced the driver’s ability to detect other vehicles on the road. Both studies used driving-relevant peripheral tasks, since drivers needed to notice what the vehicle ahead of them did, but the fixation tasks only ensured a specific point of gaze, rather than, for example, manipulating the difficulty of the fixation task and additionally examining the impact of cognitive load on peripheral detection.

The study that has come closest to our question is one by Cooper et al. (2013), in which they examined performance on a driving-relevant task (controlling a simulated vehicle) while changing the fixation position and manipulating the difficulty of a driving-irrelevant secondary task. Interestingly, and in contrast to research that has examined the effect of cognitive load on detecting driving-irrelevant stimuli (cf. the work of Bian, Miura, and Crundall, as we have just discussed), they found an improvement in lane-keeping performance under higher levels of cognitive load. Although their results don’t directly address what drivers can perceive with peripheral vision while distracted, they do suggest that drivers can use the information they do acquire to maintain (and even improve) control. Tangentially related to this body of work is a study by Gaspar et al. (2016) on the gaze-contingent Useful Field of View. They aimed to assess drivers’ ability to identify peripherally presented stimuli as they controlled a simulated vehicle, with these stimuli presented at different eccentricities, but always relative to where the driver was currently looking and under different levels of cognitive load. Although this would seem to address the gap we mentioned at the start, several issues remained. First, this study used, as one dependent measure, a task involving driving-irrelevant stimuli, and found a weak effect of cognitive load. In addition, prioritizing control over the stimuli at the expense of realism, it used simulated environments, eccentricity-scaled stimuli, and brief, masked eccentric targets, all of which might have significant effects on perception. Together, the body of driving research on cognitive load, peripheral vision, and distraction suggests that the problem is more complicated than it might first appear, and the conclusions that can be drawn seem to rely strongly on whether the task is relevant to the driver.

Complicating the question of cognitive load and peripheral vision still further, a significant body of research in driving has shown that task demands can change the pattern of drivers’ eye movements (cf. a reduction in the spread of fixations; Nunes & Recarte, 2002; Reimer, 2009; Reimer, Mehler, Wang, & Coughlin, 2012; Tsai, Viirre, Strychacz, Chase, & Jung, 2007; Victor, Harbluk, & Engström, 2005). These changes in eye movement behavior have been interpreted to reflect a reduction in drivers’ ability to search the scene (Mourant & Rockwell, 1972) and acquire the information they need in order to maintain control. However, a consequence is that many common cognitive-load tasks (e.g., the audio–verbal n-back task) might impact performance on driving-relevant or driving-irrelevant tasks by changing the location of relevant information in the driver’s field of view. Given our interest in the perceptual consequences of distraction and cognitive load, we needed to control where the driver was looking and to manipulate load within that constraint.

Having considered the perceptual factors, and discussed how they have been previously studied in the driving literature, we will now move on to attentional factors, in both driving and vision science, as they pertain to the question of what drivers might be able to detect in the periphery when distracted. The question of distraction has been a central one for driving research and driver safety in recent years, particularly with the advent of smartphones. The consensus is that using a smartphone while driving increases operational errors (McWilliams, Reimer, Mehler, & Dobres, 2015; Reimer, Mehler, & Donmez, 2014; Reimer, Mehler, Reagan, Kidd, & Dobres, 2016; Samost et al., 2016; Strayer, Cooper, & Drews, 2004; Strayer & Drews, 2007; Strayer, Drews, & Crouch, 2006; Strayer, Drews, & Johnston, 2003), and this is broadly interpreted as being a result of the driver’s need to divide their attention between the phone and the road environment. Other recent changes in the vehicle have similar consequences, in particular the shift from manual switches to touchscreens, which require the driver to look at and attend to them in order to change settings (Chiang, Brooks, & Weir, 2001; Kidd, Dobres, Reagan, Mehler, & Reimer, 2017; Lee, Mehler, Reimer, & Coughlin, 2016; Strayer, Cooper, Turrill, Coleman, & Hopman, 2016; Tsimhoni, Smith, & Green, 2004; Watson & Strayer, 2010). Although some work has explicitly considered the shifts in gaze that are intrinsic to both using smartphones in the vehicle and modern vehicle controls (Drews, Yazdani, Godfrey, Cooper, & Strayer, 2009; Sawyer, Finomore, Calvo, & Hancock, 2014), their interpretations of their results have remained focused on attention and its presumed control over awareness. In contrast, we consider what the driver can or cannot do with peripheral vision, and the impact of distraction and concomitant cognitive load on the driver’s ability to detect changes in the environment.

Moving beyond the driving literature specifically, the vision science literature has examined similar questions in the context of visual attention, although the degree to which these findings directly translate to driver behavior is unclear. The phenomenon of inattentional blindness (Mack & Rock, 1998; Neisser & Becklen, 1975; Rock, Linnett, Grant, & Mack, 1992; Simons & Chabris, 1999), or the failure to detect unexpected, presumably noticeable events in the environment when performing an unrelated task, suggests that attention is limited and that drivers, by extension, might be less than able to notice changes in the world. For example, a driver distracted by responding to a text message may be less likely to notice changes on the road because the driver is attending to the phone and not the road, and is presumed to be unable to notice what they do not attend to. However, inattentional blindness relies significantly on the observer not knowing what the unexpected event is (hence, the reason it is unexpected), and this approach is thus of unclear relevance to driving, in which the driver presumably knows to stay in their lane and monitor potential hazards.

The related phenomenon of change blindness (Simons & Levin, 1997), in which observers fail to notice changes in a scene, has attracted some interest in driving research (Edwards, Caird, & Chisholm, 2008; Strayer, Drews, & Johnston, 2003; Zhao et al., 2014). However, abrupt changes, which are typical in change detection experiments, do not easily translate to the road environment. For example, a moose that startles you as you drive down the road does not actually appear out of nowhere, even if that is your subjective experience. Although brake lights do suddenly change in the real world, the change is to some degree expected, as opposed to many of the changes studied in vision science experiments. In addition, these driving-focused change-detection studies did not separate out questions of attention and gaze location, which limits how informative they can be about what the driver is actually aware of.

Overall, inattentional blindness and change blindness suggest that asking subjects to perform a secondary task will impair performance on their primary task (e.g., if they are talking on their cell phone, they will fail to notice the unicycling clown who goes past them; Hyman, Boss, Wise, McKenzie, & Caggiano, 2010), although there is also evidence that not all simultaneous tasks compete with each other for the same resources (cf. Wickens, 2002). For that matter, work by VanRullen, Reddy, and Koch (2004) has shown that the ability to perform two simultaneous tasks depends greatly on the tasks; in some circumstances and with some tasks, observers can perform both tasks at the same level of performance as when performing each task in isolation. Therefore, it is crucial to distinguish between tasks that may incur a multitasking cost and those that do not, particularly in driving. Moreover, it is unclear whether laboratory findings such as this will generalize to more driving-relevant tasks and environments, since more natural tasks (e.g., those using real-world stimuli) are often more robust to multitasking demands, given that moving through the world requires us to multitask much of the time.

This, then, brings us to existing theories of peripheral decrements in performance in the driving literature. There are two prevailing models here: tunnel vision (Mackworth, 1965) and general interference (R. Young, 2012). Under a tunnel vision account—intimately linked with work on foveal load (cf. Chan & Courtney, 1998; Ikeda & Takeuchi, 1975; Rinalducci, Lassiter, MacArthur, Piersal, & Mitchell, 1989; Williams, 1985, 1989, 2009), and, implicitly, on cognitive load—focusing on a task at fixation is thought to greatly reduce the ability to detect more eccentric changes or targets. This idea has been operationalized in the driving literature as Martens and van Winsum’s Peripheral Detection Task (Martens & van Winsum, 2000), in which a failure to promptly detect the irrelevant peripheral light is taken as evidence for attentional tunneling (an interpretation common to the discussion of the “useful field,” a point we have discussed at some length; Wolfe et al., 2017). In contrast, an interference account takes a more graded approach, suggesting that eccentric stimuli will become harder to detect as a function of demand, but that there is no hard cutoff between sensitive and insensitive regions of the visual field. Evidence for this account appears in a study by Ringer, Throneburg, Johnson, Kramer, and Loschky (2016), in which participants were asked to discriminate the orientation of brief, size-scaled Gabor stimuli while performing a foveal discrimination task. Their results showed a large reduction in peripheral discrimination performance when both tasks were performed simultaneously (i.e., general interference), with a smaller effect of eccentricity (i.e., tunnel vision) on this performance decrement.

Our goal in this study was to assess what drivers are capable of detecting in real driving environments while they perform an orthogonal task to induce different levels of cognitive load. Investigating this question would allow us to tease apart the respective roles of peripheral vision and attention, in a way that has not been done previously in the driving literature, while simultaneously informing us as to the capabilities of peripheral vision and its robustness to interference from cognitive load. However, it is not possible or safe to control fixation on-road; both Lamble et al. (1999) and Yoshitsugu et al. (2000) performed on-road studies in very controlled environments, not on city streets. Cooper et al. (2013) performed their experiments with a driving simulator, which provides control while greatly reducing realism. Instead, we performed our experiments using dashboard camera video recorded on a variety of roadways, presented on a large display in the lab. This allowed us to manipulate fixation location while using visual stimuli sourced from real-world road environments. We set our subjects a driving-relevant task—to detect brake lights in their lane of travel—that they did simultaneously with an easy or hard secondary task at fixation. This design allowed us to safely vary fixation location, the detectability of the brake light as a function of the brake light’s distance from the point of fixation, and cognitive load.

With our design, and on the basis of previous work, we believe that three outcomes are possible from an experiment involving an eccentrically presented, driving-relevant task in conjunction with an orthogonal task to manipulate cognitive load and control eye position. Broadly speaking, one might expect detection of brake lights in the periphery to be adversely impacted by the demands of a secondary task, a result that would align with a tunnel vision interpretation. However, it is also possible that detecting a driving-relevant stimulus, rather than an irrelevant one, might behave differently, with less impact from additional load, a result more in line with an interference account. A third option is that cognitive load, separated from the question of eye movements, could have no impact on the ability to detect driving-relevant peripheral changes, a result that would not be compatible with either a tunnel vision or an interference account of perception.

Experiment 1: Detecting brake lights in road video with a simultaneous secondary task

Materials and method

Subjects

A total of 37 subjects between the ages of 18 and 50 were recruited for this experiment from the greater Boston area through the MIT AgeLab’s subject pool. The data from seven subjects were discarded from the analysis; four of these for inability to complete the primary brake light detection task above chance in any condition, two for an inability to perform the secondary fixation task above chance in any condition, and one to retain gender balance in the final sample of 30. The data from 30 subjects were retained in the final sample (15 men, mean age = 30, SD = 9.3 years; 15 women, mean age = 30.8, SD = 10.1 years). All subjects had normal or corrected-to-normal vision and were assessed for acuity with the Federal Aviation Administration’s test for near acuity (Form 8500-1) and the Snellen eye chart for distance acuity. All subjects were naïve to the purpose of the experiment and were licensed drivers with at least 1 year of driving experience. All subjects provided written informed consent, as required by MIT’s Committee on the Use of Humans as Experimental Subjects (COUHES) and the Declaration of Helsinki. Subjects were compensated $40 for their time after completion of the study.

Apparatus

All stimuli were presented using Matlab (Mathworks, Natick, MA) and Psychtoolbox 3 (Brainard, 1997; Pelli, 1997) on a 46-in. Sony Bravia HDTV (102 × 57 cm panel size; 1,920 × 1,080 pixel resolution and 60-Hz refresh rate) at a viewing distance of 55 cm. The video subtended a large portion of the screen, to provide an immersive experience similar to driving a vehicle on the road. Head position was unconstrained, to approximate the experience of being in a vehicle, and the task was performed in a dimly lit (10-lux) room.

Video stimuli and annotation

The video clips were 78° wide and 44° high and were shown at the center of the display on a gray background. This video size was selected to approximate the field of view of the in-vehicle camera used to record the stimulus videos, and therefore provides an approximately 1:1 representation of the driving scene as the driver would have viewed it. The video clips were segments of a longer (169-min) road video recorded around Boston, Massachusetts, on a combination of highways and surface roadways (720p resolution and 29.97 frames per second). The original video was recorded from a centrally mounted camera, and the center of the video approximately matched the middle of the lane of travel. To segment the video into clips, the full video was first annotated by two observers for the appearance of brake light events in the vehicle’s lane of travel. Two skilled annotators viewed the video independently, marking the frame when they observed brake light onsets and offsets in the lane of travel. When they disagreed, these independent annotations were moderated by a third annotator who had not previously seen the video, to determine the correct frame for the indicated event. On average, the brake lights were 6.84° from the center of the video (SD = 4.4°), on the basis of spatial annotation of the brake onset frame in the video.
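
For readers who wish to relate the annotated pixel positions to the eccentricities reported above, the conversion from on-screen position to degrees of visual angle follows from the display geometry given in the Apparatus section (a 102-cm-wide, 1,920-pixel panel viewed at 55 cm). The sketch below is ours rather than the authors' annotation code; the function name and the assumption of square pixels are illustrative.

```r
# Minimal sketch: eccentricity (in degrees of visual angle) of an annotated
# brake-light position, measured from the video center, assuming the
# Experiment 1 display geometry and square pixels. Names are hypothetical.
eccentricity_deg <- function(x_px, y_px, cx_px = 960, cy_px = 540,
                             screen_cm = 102, screen_px = 1920, view_cm = 55) {
  cm_per_px <- screen_cm / screen_px
  dist_cm   <- sqrt((x_px - cx_px)^2 + (y_px - cy_px)^2) * cm_per_px
  atan(dist_cm / view_cm) * 180 / pi
}

eccentricity_deg(1100, 560)   # ~7.8 deg, comparable to the 6.84-deg mean above
```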

Next, 8-s video segments were automatically extracted from the full video to generate the stimuli for brake-present and brake-absent trials. Segments for brake-present trials were selected with the constraint that the video segment had to contain exactly one brake onset event, occurring between 2 and 5 s from the beginning of the clip, and that the video segment could not have any overlap with previous brake offsets. The mean brake light duration was 3.47 s, truncated by the end of the trial after 8 s. Although the lead vehicle would certainly slow as a result of the driver applying the brakes, our detection task was indexed to the onset of the brake light, rendering it the relevant signal for participants. Once the brake has been applied, the brake light becomes a constant element in the scene, rather than a change, and while still salient, is probably less likely to be detected as a consequence. Video segments for brake-absent trials contained no brake onsets or offsets. The video was presented without sound, to avoid any extraneous nonvisual cues.
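
As a concrete illustration of the selection constraints just described, the sketch below checks whether a candidate 8-s window qualifies as a brake-present clip. It is not the authors' extraction pipeline: the data structure, the function name, and our reading of "no overlap with previous brake offsets" (the clip may not contain the offset of an earlier braking event) are assumptions.

```r
# Hypothetical check for a valid brake-present clip, assuming `events` is a
# data frame of annotated brake events with columns onset_frame and
# offset_frame, and video at 29.97 frames per second.
fps       <- 29.97
clip_len  <- round(8 * fps)   # 8-s clip
onset_min <- round(2 * fps)   # onset must fall 2-5 s after the clip start
onset_max <- round(5 * fps)

is_valid_brake_clip <- function(start_frame, events) {
  end_frame  <- start_frame + clip_len - 1
  onsets_in  <- events$onset_frame[events$onset_frame >= start_frame &
                                   events$onset_frame <= end_frame]
  offsets_in <- events$offset_frame[events$offset_frame >= start_frame &
                                    events$offset_frame <= end_frame]
  length(onsets_in) == 1 &&                      # exactly one brake onset
    (onsets_in - start_frame) >= onset_min &&    # ... falling 2-5 s into the clip
    (onsets_in - start_frame) <= onset_max &&
    !any(offsets_in < onsets_in)                 # no offset from an earlier event
}
```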

Brake light detection task (primary task)

In the brake light detection task (the primary task in the experiment), subjects were instructed to maintain fixation where indicated and to press the space bar when they saw a vehicle in their lane of travel apply the brakes, as indicated by its brake lights. Subjects were instructed to ignore vehicles at extended distances, as they would not be relevant in an on-road driving situation. Subjects were instructed to use the illumination of the vehicle’s brake lights as their primary cue as to whether the vehicle ahead of them had engaged the brakes; however, since the stimuli were taken from on-road video in uncontrolled environments, other cues might have been present and usable by subjects. Except in the practice trials (see the Procedure section), no feedback was provided for the brake light detection task.

Cognitive load manipulation (secondary task)

To manipulate cognitive load, subjects performed a secondary task as a proxy for on-road tasks in which the driver’s attention would be directed away from the task of monitoring the forward roadway for brake lights. Subjects were instructed to fixate and respond to recurring targets on a green cross, 0.61° in height and width (line width: 0.17°). The cross was superimposed on the video at one of four locations (Fig. 1a): screen center, 30° directly to the left or right of screen center, or 20° directly below screen center. The left and right fixation locations were intended as modest deviations from fixating the forward roadway, not to approximate fully lateralized locations such as the left and right side mirrors. The bottom fixation location was directly above the dashboard and was selected to approximate the typical position of a windshield-mounted smartphone. Subjects completed two variants of the secondary task: immediate (0-back) and delayed (1-back) response. In the immediate-response or 0-back condition (Fig. 1b), one of the four arms of the cross was selected at random to change color from green to white (the fixation target). Targets were shown for 250 ms (and subsequently replaced by the standard green fill color), and subjects were instructed to immediately report which arm had changed color, by using one of the four arrow keys on the keyboard. Fixation targets recurred multiple times during the trial, and the onset of each target was selected from a uniform random interval between 1,250 and 2,000 ms from the onset of the previous target (or from the beginning of the video, for the first target). This variant served to enforce fixation while imposing a minimal additional cognitive load on subjects.

Fig. 1

Experimental configuration and secondary task diagram. (a) Visualization of the display configuration and fixation locations used in Experiments 1 and 2. The circle in the middle of the road scene image is referred to in the text as the forward road location; the left circle is 30° to the left, the right circle is 30° to the right, and the smartphone-mount location is 20° below the forward road location. (b) Immediate-response secondary task used in Experiment 1: Subjects were instructed to report the arm that changed color with the corresponding arrow key (shown below the images) as soon as they perceived the change. (c) Delayed-response secondary task used in Experiment 1: Subjects withheld response on the first change in the fixation cross (0.61° high) and subsequently reported the arm that had changed previously, as shown below the images

The stimuli in the delayed-response or 1-back condition (Fig. 1c) were identical to those in the 0-back condition, and only the instructions changed between the two tasks. Subjects were instructed to withhold their response the first time an arm of the fixation cross changed color on a given trial. At the onset of each subsequent target, they used the arrow keys to report which arm had changed color previously. The delayed-response variant of our fixation task required subjects to hold the previously seen target in memory until the next one appeared, thereby imposing additional cognitive load as compared to the immediate-response condition.
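
To make the difference in memory demand between the two variants concrete, the sketch below scores a response to the i-th fixation target under each instruction: in the 0-back task the correct key is the arm that just changed, whereas in the 1-back task it is the arm that changed on the previous target. This is an illustration of the instructions, not the authors' experiment code; all names are ours.

```r
# Hypothetical scoring of one fixation-task response; `arm_sequence` is the
# ordered vector of arms that changed color on a trial.
score_response <- function(arm_sequence, i, response,
                           condition = c("0-back", "1-back")) {
  condition <- match.arg(condition)
  if (condition == "1-back") {
    if (i == 1) return(NA)                 # first target: response withheld
    correct_arm <- arm_sequence[i - 1]     # report the previous change
  } else {
    correct_arm <- arm_sequence[i]         # report the current change
  }
  response == correct_arm
}

arms <- c("up", "left", "down")            # example sequence of changed arms
score_response(arms, 2, "left", "0-back")  # TRUE: the current change is "left"
score_response(arms, 2, "up",   "1-back")  # TRUE: the previous change was "up"
```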

Subjects were given feedback on their performance in the 0-back and 1-back fixation tasks. If they responded correctly within 1,000 ms, the fixation cross remained green. If they responded incorrectly or pressed an arrow key outside the 1,000-ms response interval, the entire fixation cross changed to black for 500 ms or until the next target appeared (whichever came first). To allow sufficient time to respond, no targets were shown in the last 1,000 ms of the video. On the basis of these timing constraints, each trial had between three and five changes in the fixation target.
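
The claim that each trial contained between three and five fixation targets follows from the timing constraints just described; a quick simulation confirms the range. The code below is an illustration rather than the authors' trial-generation code, and it assumes the 1,000-ms constraint applies to target onsets.

```r
# Draw target onsets 1,250-2,000 ms apart, starting from the beginning of the
# 8-s video, with no onsets in the final 1,000 ms.
schedule_targets <- function(trial_ms = 8000, last_ms = 1000,
                             gap_min = 1250, gap_max = 2000) {
  onsets <- numeric(0)
  t <- runif(1, gap_min, gap_max)
  while (t <= trial_ms - last_ms) {
    onsets <- c(onsets, t)
    t <- t + runif(1, gap_min, gap_max)
  }
  onsets
}

set.seed(1)
counts <- lengths(replicate(10000, schedule_targets(), simplify = FALSE))
range(counts)   # three to five targets per simulated trial
```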

Procedure

On each trial, subjects viewed the 8-s clip while fixating the indicated cross, performing both the brake detection task and the fixation task simultaneously. Accuracy on the two tasks was emphasized equally to subjects (however, we considered the brake detection task the primary task, and the fixation task the secondary task for the purposes of this article). Subjects used the arrow keys to report the changes in the fixation cross, and used the space bar to indicate when they saw a brake light in their lane of travel. Brake-present and brake-absent video clips were randomly interleaved and balanced across trials. At the end of the video, subjects were shown a gray screen with a fixation cross, and they initiated the next trial by pressing a key on the keyboard when ready to continue. The next video clip appeared after a 500-ms intertrial interval.

To acclimate subjects to the demands of the experiment design, they performed an extensive practice session prior to the start of data collection. In the practice trials, the fixation cross was maintained at the center of the screen (forward road position; see Fig. 1a) for ease of instruction. To ease subjects into the task, they first performed three trials doing the immediate-response fixation task as their sole task, ignoring the video entirely, followed by three trials of the delayed-response fixation task, again as their sole task. Subsequently, subjects practiced the brake light detection task alone (although they were instructed to fixate the fixation cross, no task was presented at the fixation location on these trials) for six trials. After completing these individual-condition practice trials, subjects practiced the fixation and brake light detection tasks together, completing 14 combined trials with the immediate-response secondary task and the brake light detection task and an additional 14 combined trials with the delayed-response secondary task and the brake light detection task. Subjects were given visual feedback for all fixation trials (as is described in the Secondary Task section above; incorrect or overly delayed responses were indicated by the fixation cross switching from green to black). Feedback was also provided for the brake light task by outlining the video frame in red when responses were incorrect or were delayed by more than 2 s. This feedback for brake light detection trials was only provided during practice trials, and not in the main experiment. Subjects performed a total of 40 practice trials prior to beginning the main experiment.

In the main experiment, subjects completed a total of 288 trials, blocked by fixation location (central roadway, 30° left, 30° right, and the smartphone-mount location, 20° down; see Fig. 1a) and secondary task (immediate and delayed response) into mini-blocks of 18 trials. Subjects completed one mini-block for each of the four fixation locations (in a random order), all with one of the secondary task conditions (e.g., immediate response), before switching to the other task condition (e.g., delayed response) and completing another four mini-blocks. This sequence was repeated to produce the full number of trials. Half the subjects began with the immediate-response condition, and the other half began with the delayed-response condition. The new fixation location and secondary task were indicated to subjects at the beginning of every mini-block, and subjects were encouraged to take breaks between blocks.
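
The block structure above implies 2 secondary tasks × 4 fixation locations × 18 trials × 2 repetitions = 288 trials. The sketch below generates one such session; the factor labels and function name are ours, and the counterbalancing of the starting task across subjects is represented by the argument rather than reproduced from the authors' code.

```r
# Illustrative generator for the Experiment 1 trial sequence: mini-blocks of
# 18 trials, one per fixation location in random order, grouped by
# secondary-task condition, with the full sequence run twice.
make_session <- function(start_task = c("immediate", "delayed")) {
  start_task <- match.arg(start_task)
  tasks <- c("immediate", "delayed")
  if (start_task == "delayed") tasks <- rev(tasks)
  locations <- c("forward", "left30", "right30", "smartphone20")
  blocks <- list()
  for (repetition in 1:2) {              # the whole sequence is repeated
    for (task in tasks) {                # one secondary task at a time
      for (loc in sample(locations)) {   # four mini-blocks in random order
        blocks[[length(blocks) + 1]] <-
          data.frame(task = task, location = loc, trial = 1:18)
      }
    }
  }
  do.call(rbind, blocks)
}

nrow(make_session("immediate"))   # 288
```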

Analysis

The overall effects of fixation location and secondary task were tested with 4 (fixation location) × 2 (secondary task: immediate vs. delayed) repeated measures analyses of variance (ANOVAs) using R, version 3.5.0. We analyzed three variables in the brake detection task: brake light detection performance (proportions of correct responses across brake-present and brake-absent trials), reaction time for brake light detection, and proportions of missed brake light onsets (no response on brake-present trials). In addition, we analyzed performance on the fixation task (proportions of correct responses within the 1,000-ms response interval across all targets) with the same ANOVA design. We report effect sizes for main effects and interactions as partial eta-squared. Pairwise post-hoc tests comparing performance, reaction times, or miss rates between fixation locations were performed using Tukey’s HSD test (using the lsmeans package, version 2.27-61).
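
For readers who want to reproduce this analysis structure, a minimal sketch in R is given below. The data frame, its column names, and the values are synthetic placeholders standing in for the real measurements; the original analysis scripts are not reproduced here.

```r
# Sketch of the 4 (location) x 2 (task) repeated measures ANOVA with Tukey HSD
# follow-ups, using aov() and the lsmeans package named in the text.
library(lsmeans)

# Synthetic long-format data: one accuracy value per subject x location x task.
dat <- expand.grid(subject  = factor(1:30),
                   location = factor(c("forward", "left30", "right30", "phone20")),
                   task     = factor(c("immediate", "delayed")))
dat$accuracy <- rbinom(nrow(dat), 18, 0.8) / 18   # placeholder proportions

fit <- aov(accuracy ~ location * task + Error(subject / (location * task)),
           data = dat)
summary(fit)   # F tests for the two main effects and their interaction

# Partial eta-squared for an effect is SS_effect / (SS_effect + SS_error),
# taken from the corresponding error stratum in summary(fit).

# Tukey-adjusted pairwise comparisons between fixation locations
lsmeans(fit, pairwise ~ location, adjust = "tukey")
```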

In addition, for each analysis, we report the corresponding Bayes factor (using the R package BayesFactor, version 0.9.12-2) of the alternative hypothesis (H1) against the null (H0), reported as BF10 and calculated using the Jeffrey–Zellner–Siow prior. For each main effect (e.g., task), we report BF10 for a model containing both main effects (e.g., task + fixation location) relative to a null model without the effect (e.g., fixation location only). For the interactions, the reported Bayes factor reflects the odds in favor of a model containing the interaction term and the main effects to a model without the interaction term (main effects only). Because it represents the ratio of probabilities, BF10 values below 1 indicate that the observed result is more likely under the null hypothesis (H0) than the alternative hypothesis (H1), with smaller values indicating greater evidence in favor of the null hypothesis. Values between 1/3 and 1 are generally considered to indicate anecdotal evidence in favor of the null hypothesis, and values below 1/3 indicate at least moderate evidence in favor of the null hypothesis (e.g., Wetzels, Matzke, Lee, Rouder, Iverson, and Wagenmakers, 2011).
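
A corresponding sketch of the Bayes factor computation, using the BayesFactor package named above with its default JZS prior, might look as follows. It reuses the synthetic dat data frame from the ANOVA sketch; the model ordering shown in the comments is typical but should be checked against the printed output before indexing.

```r
# Bayes factors for the main effects and interaction, with subject entered as a
# random factor. anovaBF() enumerates the candidate models; the ratios below
# mirror the comparisons described in the text.
library(BayesFactor)

bf <- anovaBF(accuracy ~ location * task + subject, data = dat,
              whichRandom = "subject")
bf   # typically prints: [1] location + subject
     #                   [2] task + subject
     #                   [3] location + task + subject
     #                   [4] location + task + location:task + subject

bf[3] / bf[2]   # BF10 for the location main effect (location added to task-only)
bf[3] / bf[1]   # BF10 for the task main effect (task added to location-only)
bf[4] / bf[3]   # BF10 for the interaction relative to the main-effects model
```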

Results

For brake light detection performance (Fig. 2a), we found a main effect of fixation location, F(3, 87) = 7.143, p < .001, ηp2 = .198, BF10 = 374.45, but no effect of secondary task—that is, no effect of attention or cognitive load, F(1, 29) = 0.825, p = .371, ηp2 = .027, BF10 = 0.242. We found that observers’ brake light detection performance was significantly better when fixating the center roadway than when fixating either the right fixation location (p = .016), or the smartphone-mount location (p < .001). Performance was also better when fixating the left fixation location than for the smartphone-mount location (p = .043). No other pairwise comparisons were significant (all ps > .29). We observed no interaction between fixation location and secondary task for brake light detection performance, F(3, 87) = 0.81, p = .493, ηp2 = .027, BF10 = 0.088.

Fig. 2

Results, Experiment 1. (a) Mean brake light detection performance. The left bars for each fixation location show the immediate-response condition, and the right bars show the delayed-response condition. Note the decrement in performance between fixation locations. (b) Mean reaction times for brake light detection. Note the significant increase in reaction times between the forward road and all other fixation locations. (c) Proportions of missed brake light events. Note here, as well, the significant increase in miss rates for fixation locations other than the forward road location. (d) Mean secondary task performance. Note the significant differences between the immediate (0-back) and delayed (1-back) response conditions, and the lack of a difference between the different fixation locations. The dashed line at 25% denotes the expected performance level from random guesses for each target in the fixation task. Error bars in all plots show ± 1 standard error of the mean, and asterisks denote significant pairwise comparisons between fixation locations at the .05 level with Tukey’s honestly significant difference test

For reaction times (delay from brake light onset to subject response; Fig. 2b), we found a main effect of fixation location, F(3, 87) = 22.09, p < .0001, ηp2 = .432, BF10 = 2.11 × 10^10, but no effect of secondary task, F(1, 29) = 0.09, p = .76, ηp2 = .003, BF10 = 0.14. We found significant pairwise comparisons between the center roadway fixation location and the left fixation location (p < .001), the right fixation location (p < .001), and the smartphone-mount location (p < .001); all reaction times were higher for noncentral locations than for the central location. No other pairwise comparisons were significant (all ps > .42). No interaction was observed between fixation location and secondary task for reaction times, F(3, 87) = 1.17, p = .3273, ηp2 = .038, BF10 = 0.154.

We also found a main effect of fixation location for miss rates (Fig. 2c), F(3, 87) = 13.52, p < .0001, ηp2 = .317, BF10 = 7,971, and no effect of secondary task, F(1, 29) = 1.67, p = .207, ηp2 = .054, BF10 = 0.70. As with reaction times, we found significant pairwise differences between the center roadway fixation location and the left fixation location (p = .013), the right fixation location (p < .001), and the smartphone-mount location (p < .001); all miss rates were higher for noncentral locations than for the central location. In addition, miss rates were significantly higher for the smartphone fixation location than for the left fixation location (p = .034). No significant differences were apparent between any other pair of locations for miss rates (all ps > .34). No interaction was found between fixation location and secondary task for miss rates, F(3, 87) = 0.31, p = .822, ηp2 = .017, BF10 = 0.058.

Observers were significantly worse at the difficult (delayed-response) secondary task than at the easy (immediate-response) secondary task, F(1, 29) = 32.66, p < .0001, ηp2 = .53, BF10 = 1.79 × 10^26, indicating that the secondary task did manipulate cognitive load. There was no significant effect of fixation location on performance in the secondary task, F(3, 87) = 1.80, p = .15, ηp2 = .06, BF10 = 0.038, confirming that subjects maintained fixation when and where indicated (Fig. 2d). Note that performance in the delayed-response condition, although significantly reduced from performance in the immediate-response condition, is well above the performance level expected for random guesses at each fixation target onset (25%). No pairwise comparisons were significant (all ps > .24), and no interaction emerged between fixation location and secondary task, F(3, 87) = 0.406, p = .75, ηp2 = .014, BF10 = 0.05.

Discussion

To summarize, we found that while there was an effect of fixation location (and therefore, of the eccentricity of the brake light target) on brake light detection, there was no effect of cognitive load, as manipulated by secondary task difficulty. We observed a decrease in performance when peripheral vision was used, as expected. Larger and more important were the significant effects of fixation location on subjects’ reaction times to brake lights and on the number of brake light events that were missed entirely. Prior work (e.g., Lamble et al., 1999) has shown that drivers can detect brake lights in the periphery, but they accept closer vehicle separations (between the lead and chase vehicles) than they would when brake lights are more centrally located in the visual field. This indicates that in the periphery drivers are less aware of the reduced separation between vehicles and less able to stop in time. Our results fundamentally agree with these findings, in that we found that brake light detection performance was diminished as a function of their eccentricity. However, this prior work does not indicate how long drivers might take to detect a braking event when the braking vehicle is eccentric from fixation, which is critical for allowing the driver to stop the vehicle in a safe manner.

These reaction time penalties (on average, approximately 400 ms for the left, right, and smartphone fixation locations) will be dangerous if they translate to the road, since the delay in response would correspond to approximately 11 m of travel on a 100-kph roadway (roughly two car lengths). Subjects also missed significantly more brake light events when they looked away from their lane of travel, and they missed twice as many in the smartphone fixation location as when they looked at the central location. However, brake light detection was not perfect for any fixation location, a result attributable, we believe, to the variability inherent in the natural driving scenes we used (e.g., variable following distance between vehicles; different brake light configurations across vehicles; and variable illumination, therefore variable visibility, of brake lights). In fact, we would not expect drivers on the road to detect every brake light in their lane for much the same reasons.
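
The travel-distance figures quoted here (and again in the General Discussion) follow directly from speed multiplied by time; the one-line helper below, with a name of our choosing, simply makes the conversion explicit.

```r
# Distance traveled during a reaction-time penalty, at a given speed.
rt_penalty_to_meters <- function(rt_penalty_s, speed_kph = 100) {
  (speed_kph / 3.6) * rt_penalty_s   # km/h -> m/s, then multiply by seconds
}

rt_penalty_to_meters(0.400)   # ~11.1 m for the ~400-ms penalty at 100 kph
rt_penalty_to_meters(0.573)   # ~15.9 m for the 573-ms smartphone-mount penalty
```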

Although we observed no effect of our cognitive-load manipulation on any measure of brake light detection performance, we note that performance across the two levels of the secondary task did vary significantly. The immediate-response condition was comparatively easy for subjects, and the errors that did occur can likely be attributed to motor slips or failures to respond within the response window. The delayed-response condition was clearly more challenging, as shown by the decrease in performance relative to the immediate-response condition, and we can infer that it imposed a greater cognitive load, although we can characterize that load only in relative terms. This point is important because the easy immediate-response task showed no impact on brake light detection or any other measure, and the more difficult delayed-response task, presumably imposing greater cognitive load, likewise had no impact on any of our measures. Although it might be possible to find an effect of cognitive load with this paradigm, any such effect is dwarfed by the magnitude of the effect of fixation location. Given this surprising result, particularly in light of the literature on distraction, we performed a second experiment with a more difficult secondary task.

Experiment 2: Detecting brake lights with a simultaneous orientation-discrimination secondary task

In Experiment 1, we showed that perceptual factors (fixation location) had an effect on detection performance, reaction time, and miss rate, whereas we observed no effect of secondary task difficulty, and therefore no impact of cognitive load on our brake light detection task. Although we can reasonably expect that the subjects in Experiment 1 would have done their best to perform well on both tasks, the difficulty of the tasks and their timings may have allowed subjects to shift their attention between the two. Our goal in Experiment 2 was therefore to attempt to replicate the pattern of results from Experiment 1 with a novel, more difficult secondary task. Except where noted, the procedure and analyses for Experiment 2 were identical to those described for Experiment 1.

Method

Subjects

A total of 12 subjects (including two authors; eight men and four women, mean age = 29.1 years), none of whom had participated in Experiment 1, were recruited for this experiment from the MIT community. All subjects had normal or corrected-to-normal vision by self-report and, aside from the authors, were naïve to the purposes of the experiment. All subjects were also licensed drivers with at least one year of driving experience. All subjects provided written informed consent, as required by MIT’s Committee on the Use of Humans as Experimental Subjects and the Declaration of Helsinki. Subjects were compensated $20 for their time after completion of the study.

Apparatus

All stimuli were presented on a 55-in. LG OLED TV (120 × 67 cm panel size, 1,920 × 1,080 pixel resolution and 60-Hz refresh rate) at a viewing distance of 57 cm. The video subtended a large region of the screen (78° × 44°) to provide an immersive experience akin to driving a vehicle on the road; the configuration was identical to that used in Experiment 1, aside from the change in display. Head position was constrained by a chinrest to ensure a constant viewing distance, and the task was performed in a dimly lit room (~ 10 lux).

Brake light detection task

The brake light detection task was identical to that used in Experiment 1; subjects responded with the space bar as soon as they detected a brake light onset in their lane of travel. Exactly the same video segments were used in Experiments 1 and 2, and they were presented at the same size.

Secondary task

A new secondary (fixation) task was used to manipulate cognitive load in this experiment (Fig. 3). Subjects were shown sinusoidal gratings at the same fixation locations used in Experiment 1 (Fig. 1a). All gratings had a spatial frequency of 6 cycles/deg and were displayed at 100% contrast within a circular aperture (1.19° diameter). For most of the trial, subjects were shown a sinusoidal concentric grating (a bullseye pattern) at the fixation location. This grating was replaced by brief, recurring linear gratings (fixation targets), with the same duration (250 ms) and frequency (every 1,250–2,000 ms) as the targets in Experiment 1.

Fig. 3

Secondary task diagram for Experiment 2. (a) Immediate-response (0-back) secondary task (oriented gratings) used in Experiment 2. Subjects were instructed to report whether the oriented grating (1.19° diameter) was tilted to the left or right of vertical, as indicated by the arrows below the gratings. (b) Delayed-response (1-back) secondary task for Experiment 2. Subjects withheld response on the first oriented grating and reported the orientation of subsequent gratings relative to each previous grating, as is shown below the second grating (counterclockwise or clockwise changes were indicated with the left and right arrow keys, respectively)

In the immediate-response (0-back) condition (Fig. 3a), target orientations were randomly selected to be either 15° to the left or to the right of vertical (0°). Subjects reported the direction of tilt with respect to vertical using the left and right arrow keys. In the delayed-response (1-back) condition (Fig. 3b), target orientations were selected from five orientations with respect to vertical (– 80°, – 40°, 0°, 40°, and 80°), with the constraint that the orientation of each target had to be 40° clockwise or counterclockwise relative to the preceding target. The orientation of the first target on each trial was randomly selected from these five orientations, and subjects were instructed to withhold their response. At the onset of each subsequent target, subjects reported its orientation relative to the previous grating. Subjects used the left and right arrow keys to indicate counterclockwise and clockwise changes, respectively. This task required subjects to maintain a representation of the grating in working memory to compare it to the orientation of the subsequent grating, rather than merely holding one of four options in working memory, as in Experiment 1.
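
To make the 1-back sequence constraint concrete, the sketch below generates a target orientation sequence under the rule just described (each grating 40° clockwise or counterclockwise from the previous one, staying within the five allowed orientations) and derives the correct responses, assuming positive values denote clockwise tilt. It is illustrative only, not the authors' stimulus code, and all names are ours.

```r
# Hypothetical generator for a 1-back orientation sequence in Experiment 2.
make_orientation_sequence <- function(n_targets) {
  allowed <- c(-80, -40, 0, 40, 80)
  ori <- numeric(n_targets)
  ori[1] <- sample(allowed, 1)
  for (i in 2:n_targets) {
    steps <- c(-40, 40)
    valid <- steps[(ori[i - 1] + steps) %in% allowed]    # stay within the set
    ori[i] <- ori[i - 1] + valid[sample.int(length(valid), 1)]
  }
  ori
}

ori <- make_orientation_sequence(4)      # four targets on a hypothetical trial
ifelse(diff(ori) > 0, "right", "left")   # correct keys from the second target on
```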

As in Experiment 1, subjects were given feedback on their performance in both fixation tasks. If they responded correctly within 1,000 ms, they continued to see the concentric grating. If they responded incorrectly or pressed an arrow key outside the 1,000-ms response interval, the entire circular aperture changed to black for 500 ms. Between three and five changes in the fixation target occurred within each 8-s video trial.

Results

For brake light detection performance (Fig. 4a), we found a main effect of fixation location, F(3, 33) = 5.40, p = .004, ηp2 = .329, BF10 = 29.8, but no significant effect of secondary task with the new oriented-grating secondary tasks, F(1, 11) = 1.44, p = .256, ηp2 = .115, BF10 = 0.316, and no interaction between secondary task and fixation location, F(3, 33) = 0.669, p = .58, ηp2 = .06, BF10 = 0.25. Pairwise comparisons showed significant differences in brake light detection performance between the forward roadway location and the left fixation (p = .018), right fixation (p = .006), and smartphone-mount (p = .018) locations. In all cases, performance was better at the central fixation location. All other pairwise comparisons were not significant (ps > .97).

Fig. 4

Results, Experiment 2. (a) Mean brake light detection performance. The left bars for each fixation location show the immediate-response condition, and the right bars show the delayed-response condition. (b) Mean reaction times for brake light detection. Note the significant increase in reaction times between the forward road and all other fixation locations. (c) Proportions of missed brake light events. Note here, as well, the significant increase in miss rates for all fixation locations other than the forward road location. (d) Mean secondary task performance. Note the significant differences between the immediate- and delayed-response conditions, and the lack of a difference between different fixation locations. The dashed line at 50% performance denotes the expected performance level from random guesses for each target. The error bars in all plots show ± 1 standard error of the mean, and asterisks denote significant pairwise comparisons between the fixation locations at the .05 level with Tukey’s honestly significant difference test

For reaction times to brake light events (Fig. 4b), we found a main effect of fixation location, F(3, 33) = 22.2, p < .0001, ηp2 = .668, BF10 = 4.13 × 10^7, but no effect of secondary task, F(1, 11) = 1.09, p = .32, ηp2 = .09, BF10 = 0.46, and no interaction between fixation location and secondary task, F(3, 33) = 2.05, p = .13, ηp2 = .157, BF10 = 0.50. Pairwise comparisons showed significant differences in reaction times between the forward roadway location and the left fixation (p < .0001), right fixation (p = .0009), and smartphone-mount (p < .0001) locations. In addition, a significant difference in reaction times was found between the right fixation location and the smartphone fixation location (p = .004), but no difference was found between the left and right fixation locations (p = .68). We found a marginal difference between the left fixation location and the smartphone fixation location (p = .06).

For missed brake light events (Fig. 4c), we found a main effect of fixation location, F(3, 33) = 9.65, p < .001, ηp2 = .467, BF10 = 26,126.99, but no effect of secondary task, F(1, 11) = 2.67, p = .13, ηp2 = .195, BF10 = 0.68, and no interaction between fixation location and secondary task, F(3, 33) = 1.3, p = .29, ηp2 = .107, BF10 = 0.262. We observed significant pairwise comparisons between the forward roadway and left fixation (p = .003), forward roadway and right fixation (p = .0004), and forward roadway and smartphone-mount (p = .0003) locations; in all cases, miss rates were higher at the noncentral fixation locations. No other pairwise comparisons were significant (ps > .82).

For the secondary task itself (Fig. 4d), we found a main effect of secondary task type (immediate vs. delayed response), F(1, 11) = 16.61, p = .002, ηp2 = .60, BF10 = 1.18 × 10^13, and no effect of fixation location, F(3, 33) = 0.44, p = .73, ηp2 = .04, BF10 = 0.065. No interaction emerged between secondary task and fixation location, F(3, 33) = 0.799, p = .50, ηp2 = .07, BF10 = 0.12. We found no significant pairwise comparisons between fixation locations (all ps > .69).

Discussion

The purpose of this experiment was to determine whether we could replicate the effect we had observed in Experiment 1 with a different, and likely more challenging, secondary task. We replicated all of the core findings, showing again that fixation location had significant effects on detection, reaction time, and miss rate, whereas secondary task difficulty (and therefore cognitive load) continued to have no effect on our results. Again, the effect of cognitive load on detecting brake lights was small in comparison to the effect of fixation location, which shifts the driving-relevant information in the scene to the periphery. Performance on the new secondary task suggests that the delayed-response task here was somewhat harder than the secondary task in Experiment 1. To enable comparison across our different secondary tasks in Experiments 1 and 2, we converted the results to d'. We found d' = 2.59 for the immediate-response task and d' = 1.72 for the delayed-response secondary task in Experiment 1, and d' = 2.10 for the immediate-response task and d' = 1.03 for the delayed-response secondary task in Experiment 2. These results suggest that both the immediate- and delayed-response tasks in Experiment 2 were somewhat more difficult than the tasks we had used in Experiment 1 (between experiments, differences in d' of 0.49 and 0.69 for the immediate and delayed tasks, respectively). Even with this different secondary task, we continued to observe no effect of secondary task difficulty, and therefore of cognitive load, on any of our measures. These results strongly suggest that perceptual changes, rather than the added load of a difficult secondary task, have a greater impact on the ability to detect brake lights in driving environments.
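
The text does not spell out how proportion correct was converted to d' across tasks with different numbers of response alternatives (four arms in Experiment 1, two alternatives in Experiment 2). One standard option, which we sketch below purely as an assumption rather than as the authors' method, is the equal-variance Gaussian m-alternative forced-choice model, inverted numerically.

```r
# Under the equal-variance Gaussian m-AFC model, proportion correct is
#   Pc(d') = integral of dnorm(x - d') * pnorm(x)^(m - 1) dx,
# which reduces to chance (1/m) at d' = 0. dprime_from_pc() inverts it.
pc_from_dprime <- function(dprime, m) {
  integrate(function(x) dnorm(x - dprime) * pnorm(x)^(m - 1),
            lower = -Inf, upper = Inf)$value
}

dprime_from_pc <- function(pc, m) {
  stopifnot(pc > 1 / m, pc < 1)   # must be above chance and below ceiling
  uniroot(function(d) pc_from_dprime(d, m) - pc, interval = c(0, 6))$root
}

dprime_from_pc(0.90, m = 4)   # 90% correct with four alternatives: d' of ~2.4-2.5
dprime_from_pc(0.90, m = 2)   # 90% correct with two alternatives: d' of ~1.8
```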

General discussion

Our goal in this study was to build on findings in both driving and vision research to disentangle the question of distraction and the inherent cognitive load of distracting tasks from that of changes in the eccentricity of driving-relevant information as a consequence of shifts in gaze location. When subjects performed both tasks simultaneously (detecting brake lights and responding to fixation targets), we found that fixation location—but not cognitive load—impacted their accuracy on the brake light detection task in both experiments. In fact, we found large increases in reaction time and miss rate when subjects were asked to detect eccentric brake lights, and these effects were minimally impacted by changes in the difficulty of the secondary task, even as they resulted in greater cognitive load for subjects. The absence of an effect of the secondary task on brake light detection performance was supported by a Bayes factor analysis, which indicated that the observed results were more likely under the null than under the alternative hypothesis (BF10s of 0.242 and 0.316 in Exps. 1 and 2, respectively). Among our other measures, the reaction time data indicated substantial evidence for the null hypothesis in Experiment 1, and anecdotal evidence in Experiment 2 (BF10s of 0.14 and 0.46 in Exps. 1 and 2, respectively). In addition, our analysis of the proportions of brake light events that were missed provided anecdotal evidence in favor of the null (BF10s of 0.70 and 0.68 in Exps. 1 and 2, respectively).

One possible interpretation of these results is that a statistically significant effect of cognitive load might have been observed with a larger sample of observers, a more challenging fixation task, or different dependent measures. However, we note that the emphasis of this study was on practical relevance rather than statistical significance. When observers responded to events within real-world driving scenes, the effects of cognitive load were much smaller than those of eccentricity, changing brake detection performance by only 1.3%, as compared to an effect of fixation location of 5.8% across experiments. Similarly, the effect of cognitive load increased reaction times by only 35 ms on average, as compared to 458 ms for the effect of fixation location. These values translate to a substantial difference in traveling distance at highway speeds (100 kph): 1 m versus 13 m. In other words, although we cannot definitively conclude that no effect of cognitive load exists, our results indicate that this effect was much smaller in our results than that of eccentricity. Furthermore, although it is possible that a more difficult cognitive load manipulation might produce a significant effect on peripheral brake detection performance, this would be inconsistent with our goals in these experiments. Rather than having observers perform an inordinately difficult task, we selected cognitive load manipulations that would be similar to the range of secondary task difficulties encountered during everyday driving (e.g., monitoring your smartphone’s GPS, changing radio stations). Similarly, we selected dependent measures (e.g., brake detection performance, reaction time) that are directly relevant to real-world driving situations.

Our findings accord well with existing work in driving research, where peripheral vision has been shown to be sufficient for pedestrian detection (Alberti, Horowitz, Bronstad, & Bowers, 2014) and, in some instances, brake light detection, even when drivers are engaged in secondary tasks (Lamble et al., 1999; Yoshitsugu et al., 2000). However, we should note that merely being sufficient for a task is not the same as being ideal, and drivers are substantially impaired in their ability to detect brake lights when they look away from the road in an on-road following task (Summala, Lamble, & Laakso, 1998). More broadly, our work on peripheral vision (Rosenholtz, 2016) supports the idea that peripheral information is an underappreciated determinant of performance in driving. On the basis of these results, we would suggest that drivers’ ability to acquire information from peripheral vision is robust to cognitive pressures and that the need to allocate attention to multiple locations (e.g., while distracted or performing multiple simultaneous tasks) may play a lesser role than has previously been thought.

Given our results, we suggest that many findings attributed to distraction in tasks that require drivers to take their eyes off the road may, in fact, be caused by changes in what drivers can perceive, rather than by the additional tasks they were asked to perform. Our distracting task was, of course, representative of only a subset of potential on-road cognitive tasks: those that shift the point of gaze away from the forward roadway in conjunction with the task load itself. We also note that fixation is rarely controlled in studies of driver distraction and of drivers' ability to detect driving-relevant stimuli, for the simple reason that doing so on open roadways is not safe. When fixation location has been controlled, it has been done exclusively on closed roadways with only a single lead vehicle and no other traffic (Lamble et al., 1999; Yoshitsugu et al., 2000). By controlling fixation while using video of typical road environments, we were able to address this gap in the literature. Our results suggest that some findings on distraction may benefit from reinterpretation, bearing in mind how the driver's view of the scene shifts as a consequence of these tasks.

However, it is essential to interpret these findings with caution: The fact that our subjects remained accurate and fast at detecting brake lights in the lab, even while performing a difficult, driving-irrelevant secondary task, does not mean that cognitive load has no effect on the road. Our results suggest that the increased eccentricity of relevant information was the larger determinant of subjects' ability to detect brake lights in our task, but detecting brake lights in the laboratory is a far simpler task than controlling a vehicle on the road. We are not arguing that drivers can distract themselves with tasks that take their eyes off the road and still remain in control of their vehicles; if anything, our results indicate that relying on peripheral vision is inherently dangerous, because even a task as simple as detecting brake lights shows increased reaction times and miss rates in the laboratory when the braking vehicle is more eccentric.

Furthermore, it is important to distinguish between what a driver can detect and whether the driver can make an appropriate response. The implication of our results is simple: A distracted driver performing a task that pushes relevant elements of the scene to more eccentric positions will have less time, and therefore less space, in which to stop safely before colliding with the rear of a vehicle ahead. To take one particularly dramatic example, consider the smartphone-mount location we used (20° below the forward roadway fixation location). Even with this relatively small shift in gaze position (compared with the shift that would occur if the driver held the phone in their lap), this condition produced the largest reaction time penalty of the noncentral locations we tested. Across experiments, the mean reaction time difference between the smartphone-mount location and the forward roadway was 573 ms. This equates to 16 m of travel at 100 kph (50 feet at 60 mph), or several car lengths, and easily the difference between a safe stopping distance and a collision with a leading vehicle. We also found that subjects' miss rates doubled at this fixation location. A driver using a phone mounted in this location, as a ridesharing driver might, would be reliant on peripheral vision and likely much less able to respond quickly and accurately to the environment. Whether this is, in fact, the case would need to be assessed on-road or in simulation; future work could examine both the operational impacts and how to ameliorate them (e.g., with alternative mounting locations).
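As a quick check on these figures, the short sketch below reproduces the 16 m and 50-foot values from the 573 ms reaction time difference in both metric and imperial units (again, our own arithmetic rather than code from the study).

```python
# Sanity check on the smartphone-mount figures: distance covered during a
# 573 ms reaction time difference at highway speed, in meters and in feet.

def meters_during_delay(speed_kph: float, rt_ms: float) -> float:
    return (speed_kph * 1000 / 3600) * (rt_ms / 1000)

def feet_during_delay(speed_mph: float, rt_ms: float) -> float:
    return (speed_mph * 5280 / 3600) * (rt_ms / 1000)

print(round(meters_during_delay(100, 573), 1))  # ~15.9 m, i.e., roughly 16 m
print(round(feet_during_delay(60, 573), 1))     # ~50.4 ft, i.e., roughly 50 feet
```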

Our results also have implications for drivers of semi-automated vehicles. One can imagine a scenario in which the vehicle is driving itself while the driver attends to a smartphone or the center console and uses peripheral vision to monitor the environment. Given the increased reaction times and miss rates we observed when subjects were fully aware of the tasks they needed to perform, we might expect a driver in this circumstance to be even slower to reassert control, even with an alert from the autonomous vehicle to cue them. As such, semi-autonomous driving interfaces that encourage drivers to keep their point of gaze on the forward roadway, even when this is not operationally necessary, may pay substantial safety dividends. Moreover, our results challenge the idea that autonomous vehicles can free drivers to engage in secondary tasks if the vehicle relies on them to monitor the roadway environment in autonomous mode in order to support transitions between vehicle and driver control.

Overall, our results show that, with the demanding secondary task we used, drivers' ability to detect driving-relevant changes in the road environment was impacted not by cognitive load, but by the shift to relying on peripheral vision. Future work on distraction should therefore consider whether observed changes in behavior result from the distracting task itself or from the shift in gaze position and the concomitant reliance on peripheral vision for environmental monitoring. In addition, understanding peripheral vision in driving will likely require assessing driving-relevant stimuli in the periphery; irrelevant tasks (such as the peripheral detection task), although they do probe perception, do not probe it in a task-relevant manner, which limits their usefulness. Distraction and increased cognitive load cannot and should not be discounted, but understanding what the driver can perceive while distracted, on the basis of where the driver has to look, is essential for ameliorating the effects of distraction and multitasking on the road.

Author note

The authors thank an anonymous reviewer for a comment in the review process that led us to catch an error in the analysis we would otherwise have missed. Support for Experiment 1 was provided in part by the US Department of Transportation’s Region I New England University Transportation Center at MIT and by the Toyota Class Action Settlement Safety Research and Education Program. The views and conclusions expressed are those of the authors and have not been sponsored, approved, or endorsed by Toyota or the plaintiffs’ class counsel. Support for Experiment 2 was provided by the Toyota Research Institute/CSAIL Joint Research Center. B.W., B.D.S., A.K., B.R., and R.R. designed the experiment; B.W. and A.K. built the experiment; B.W. and B.D.S. collected the data; B.W., A.K., and B.D.S. analyzed the data; and B.W., B.D.S., A.K., B.R., and R.R. wrote the manuscript. These experiments were not preregistered, but the experimental code, stimuli, data, and analysis code are available on request from the corresponding author.