It is known that people are able to adjust their ongoing goal-directed movements if the position of a target changes (Brenner & Smeets, 1997; Goodale, Pélisson, & Prablanc, 1986; Pélisson, Prablanc, Goodale, & Jeannerod, 1986) or if the visual representation of their hand is perturbed (Brenner & Smeets, 2003; Sarlegna et al., 2003; Saunders & Knill, 2003). These adjustments are characterized by a smooth deviation of the hand trajectory in the direction of the perturbation if the target was perturbed (Oostwoud Wijdenes, Brenner, & Smeets, 2011), or in the opposite direction if the visual representation of the hand was perturbed (Sarlegna et al., 2003). These movement adjustments are studied in order to gain insights into the online control of movements. One aspect of interest in this context is the time that the system takes to initiate a movement adjustment: the response latency. For instance, it has been claimed that responses have a shorter latency for changes in target position than for changes in hand representation position (Sarlegna et al., 2003).

Determining the movement adjustment onset is similar to determining the movement initiation onset. However, as the hand is already moving when an adjustment starts, the between-trial variability in position and speed makes it more challenging to determine the response latency of movement adjustments than to determine the initiation of a movement. In both cases, the onset is generally detected only after the actual onset, because it is detected only when the response is clearly larger than the noise. This introduces the problem that the judged delay depends on the intensity of the movement adjustment: Responses in which the rate of change of the position (or velocity, or acceleration) is larger are detected earlier (Hu & Knill, 2011; Teasdale, Bard, Fleury, Young, & Proteau, 1993). Both the movement duration and the perturbation size affect this rate of change of the response. The response intensity increases when either the movement duration decreases or the size of the adjustment increases, or when both occur (Gritsenko, Yakovenko, & Kalaska, 2009; Oostwoud Wijdenes et al., 2011; Veerman, Brenner, & Smeets, 2008). We evaluated the influence of response intensity on different methods of determining the response latency of movement adjustments.

In the literature, one finds a variety of methods to determine the latency of movement adjustments. Such methods can be applied to position (Brière & Proteau, 2011; Johnson, Van Beers, & Haggard, 2002; Proteau, Roujoula, & Messier, 2009; Reichenbach, Bresciani, Peer, Bülthoff, & Thielscher, 2011; Reichenbach, Thielscher, Peer, Bülthoff, & Bresciani, 2009; Sarlegna et al., 2003), to velocity (Brenner & Smeets, 1997; Desmurget et al., 2004; Fautrelle, Barbieri, Ballay, & Bonnetblanc, 2011; Gritsenko et al., 2009; Leonard, Gritsenko, Ouckama, & Stapley, 2011; Soechting & Lacquaniti, 1983; Veerman et al., 2008), or to acceleration (Kadota & Gomi, 2010; Kerr & Lockwood, 1995; Oostwoud Wijdenes et al., 2011; Prablanc & Martin, 1992; Reynolds & Day, 2007) data. These kinematic variables can be analyzed in 3-D space, or the analysis can be restricted to the direction of the perturbation.

One subjective method of determining the response latency is visual inspection (Day & Lyon, 2000; Reynolds & Day, 2005, 2007). This method is frequently combined with an objective method. We will not discuss subjective methods here, but have previously argued that such methods can be replaced by a combination of objective methods (Schot, Brenner, & Smeets, 2010). A first type of objective method to determine the response latency is based on a threshold applied to a parameter of the movement itself. Some studies have used a fixed threshold to determine the response latency (Brière & Proteau, 2011; Proteau et al., 2009), whereas others have used a threshold that is a percentage of the maximum value of the relevant parameter in each trial (Johnson et al., 2002; Reichenbach et al., 2011; Reichenbach et al., 2009).

A second type of method is based on a statistical analysis of the responses. To this end, authors have performed t tests (Fautrelle et al., 2011; Kadota & Gomi, 2010; Prablanc & Martin, 1992), one-tailed Mann–Whitney U tests (Brenner & Smeets, 1997), analyses of variance (Sarlegna et al., 2003), or one-way multivariate analyses of variance (Desmurget et al., 2004) to compare the perturbed and unperturbed movements, or two opposing perturbed movements, to determine the point at which the two started to differ significantly. Others have determined the variability in unperturbed movements and considered the response latency to be the first moment in time at which the adjusted movement deviated by more than one standard deviation (Leonard et al., 2011; Saunders & Knill, 2003), 1.5 standard deviations (Soechting & Lacquaniti, 1983), two standard deviations (Kerr & Lockwood, 1995; Reynolds & Day, 2007), or one standard error (Hu & Knill, 2011) from the mean of the unperturbed movement. Gritsenko et al. (2009) defined the response latency as the first point outside of a 95 % confidence interval that was determined over the first 100 ms of the response (i.e., before the adjustment had started). Reynolds and Day (2005) defined the latency as the first moment at which the 95 % confidence intervals of the control trials and perturbed trials did not overlap. Most authors have also included in their method a minimum duration for which the difference should be significant.

The third type of method was introduced in our previous work (Oostwoud Wijdenes et al., 2011; Veerman et al., 2008). With this extrapolation method, we tried to address a problem that the first two categories of methods share: Because a correction is only detected once it clearly exceeds the noise, its latency is always overestimated. The idea behind the extrapolation method is first to identify an interval during which the response is unambiguously increasing and subsequently to extrapolate the response backward to find its onset. The moment in time at which this extrapolation crosses zero is considered to be the response latency.

If one wants to know how much time it takes to generate a movement adjustment in response to a target jump, the method for determining the latency should first and foremost accurately determine the latency: That is, the average determined latency should be close to the true latency. If one wants to compare response latencies—for instance, in order to determine whether the responses to changes in target position and the responses to changes in hand representation position have different latencies—the method must be able to reliably distinguish between the two latencies if they are different. In this case, you need to determine the latency precisely: You should be very certain about the determined latency (even if it is systematically different from the true latency). When analyzing real data, it is difficult to separate the noise from the signal, and it is therefore impossible to know the true latency. By simulating movements with noise added to them, we know exactly when the movement adjustment started and can examine how well this onset is retrieved in the face of the added noise. We therefore simulated data sets with a known response latency and a variety of movement adjustments, and compared the three types of methods in terms of accuracy (i.e., how close the average determined latency is to the simulated latency) and precision (i.e., how variable the determined latencies are).

Method

Data simulation

The methods that we examined are aimed at determining the response latency of multidimensional movements. However, we only simulated the component of the movement in the direction of the perturbation, because usually only the component of movements in a single dimension is used to determine the response latency. In our simulation, the target was perturbed in a direction perpendicular to the main movement direction. Hand position in the direction of the perturbation was calculated for a set of control movements and for a set of movements with online adjustments using a minimum-jerk model (Flash & Henis, 1991; Flash & Hogan, 1985; Henis & Flash, 1995). By using the minimum-jerk model, we assumed that both the control movements and the movement adjustments were maximally smooth. This assumption seems to be justified by experimental data that have shown negligible lateral deviations of control movements if subjects move in the transverse plane (Gritsenko et al., 2009; Liu & Todorov, 2007), and by experimental data showing that the peak acceleration of movement adjustments is similar to that of maximally smooth adjustments (Oostwoud Wijdenes et al., 2011). We realize that a minimum-jerk model does not describe online movement corrections perfectly, especially at the end of the movement (Liu & Todorov, 2007). However, we think that for the purpose of this study it is an appropriate model to use, because the different methods mainly focus on the initial part of the correction, and for this part the minimum-jerk model seems to correspond quite accurately to human movement corrections.
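
As a point of reference, the minimum-jerk position profile for a movement of duration \( T \) from \( x_0 \) to \( x_1 \), with zero velocity and acceleration at both ends, is

\[ x(t) = x_0 + \left( x_1 - x_0 \right) \left( 10\tau^3 - 15\tau^4 + 6\tau^5 \right), \qquad \tau = t/T \]

(Flash & Hogan, 1985). We used this profile for the control movements; the movement adjustments were modeled with the more general quintic solution sketched below.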

There are several ways to implement a movement adjustment when using a minimum-jerk model. We used the abort–replan scheme (Flash & Henis, 1991; Henis & Flash, 1995) because it is the most straightforward way to model movement adjustments. We realize that the superposition scheme (Henis & Flash, 1995) or reformulating the model as an optimal feedback control model (Liu & Todorov, 2007) might fit real data better in some circumstances, but our main aim was to understand the differences between different methods to determine the response latency, rather than to replicate certain movement adjustments as precisely as possible.

Since the predictions of a minimum-jerk model are independent for orthogonal directions, we could limit our simulations to the deviation of the hand in the direction of the perturbation. Note that due to the added variability that will be described below, the unperturbed movements had a component in the direction of the perturbation even though the perturbation was orthogonal to the main movement direction. We added variability by simulating movements with different durations and end positions (details in the next paragraph). We assumed that subjects always responded with a latency of 100 ms to a change in target position (with variability in the magnitude of the response but not in its timing). The spatial resolution (0.01 mm) of the simulations was equal to that of the Optotrak, and the sampling frequency (500 Hz) was matched to commonly used settings.

The initial boundary conditions for the control movements were zero position, velocity, and acceleration. The position at the end of the movement was determined by a Gaussian distribution with mean μ = 0 cm and standard deviation σ = 0.65 cm (we only simulated the movement component in the direction of the perturbation). This value for σ was based on the standard deviation that we measured in an experiment in which subjects made horizontal reaching movements over 90 cm without a change in target position, for which the average movement duration was 384 ms (Oostwoud Wijdenes, Brenner, & Smeets, 2013). These movements are amongst the longest and fastest that have been reported in the literature, factors that are known to lead to large variability (Fitts & Peterson, 1964). We used the standard deviation of these movements to approximate the upper limit of the possible variation in end position for this experimental paradigm. The final boundary conditions for velocity and acceleration were zero. We simulated movements with three different mean durations (300, 400, and 500 ms). For each of these mean durations, 100 movement times were drawn from a Gaussian distribution with σ = 30 ms (Fig. 1). This value of σ was also based on experimental data (Oostwoud Wijdenes et al., 2011). Velocity and acceleration profiles were obtained by (double) differentiating the simulated position trajectories. We filtered the acceleration with a second-order recursive bidirectional Butterworth filter at 50 Hz, as we have done with real data.
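
To make these steps concrete, the following is a minimal sketch in Python of how such control trials can be generated, differentiated, and filtered. It assumes NumPy and SciPy and SI units, and the function and variable names are ours; it is meant as an illustration of the procedure described above, not as the original simulation code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0            # sampling frequency (Hz)
DT = 1.0 / FS
RES = 1e-5            # spatial resolution: 0.01 mm, expressed in metres

def minimum_jerk(x_end, duration):
    """Minimum-jerk position profile from rest at 0 to rest at x_end (m)."""
    tau = np.arange(0.0, duration, DT) / duration
    x = x_end * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return np.round(x / RES) * RES              # quantise to the spatial resolution

def simulate_control_trial(mean_duration, rng):
    """One control trial: Gaussian end position (SD 0.65 cm) and duration (SD 30 ms)."""
    duration = rng.normal(mean_duration, 0.030)  # s
    x_end = rng.normal(0.0, 0.0065)              # m
    pos = minimum_jerk(x_end, duration)
    vel = np.gradient(pos, DT)
    acc = np.gradient(vel, DT)
    b, a = butter(2, 50.0 / (FS / 2.0))          # 2nd-order Butterworth, 50-Hz cutoff
    acc = filtfilt(b, a, acc)                    # bidirectional (zero-phase) filtering
    return pos, vel, acc

rng = np.random.default_rng(1)
controls = [simulate_control_trial(0.400, rng) for _ in range(100)]
```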

Fig. 1

Examples of simulated movements and illustration of the different methods. The left column shows the position, velocity, and acceleration profiles for ten control trials and ten 2-cm perturbation trials with an average duration of 400 ms. The second column illustrates the threshold method for one of those trials. The third column illustrates the confidence interval method, and the fourth column the extrapolation method for the same trial. The vertical gray dotted lines indicate the simulated latency: 100 ms. The cyan asterisks indicate the latency as determined with each technique. In this example, the latency is detected too early in four cases (using the threshold and extrapolation methods on the acceleration difference, and using the confidence interval method on both velocity and acceleration), and too late in the other five cases. Notice that the threshold and extrapolation methods are based on the difference between control trials and perturbation trials (orange curves). The confidence interval method compares a single perturbation trial (red curve) with the distribution of control trials (black curves show the average, and gray areas the 95 % confidence intervals)

For the trials with adjustments, we started with unperturbed movements towards positions that were determined by a Gaussian distribution with mean μ = 0 cm and standard deviation σ = 0.65 cm. For the single-trial analyses, we generated three sets of 4,000 unperturbed movements that on average took 300, 400, or 500 ms, all with σ = 30 ms. For the perturbed movements, the original movement was aborted after 100 ms and replaced by an adjustment (Henis & Flash, 1995). The initial boundary conditions for the movement adjustments were the position, velocity, and acceleration of the original movements at that moment (i.e., after 100 ms). We assumed that the adjustment did not affect the total movement duration, which is congruent with experimental data for target jumps perpendicular to the movement direction (Blouin, Bridgeman, Teasdale, Bard, & Fleury, 1995; Oostwoud Wijdenes et al., 2011). The amplitude of the movement adjustments was determined by a Gaussian distribution with μ = 1, 2, 3, or 4 cm and σ = 0.72 cm, again corresponding to experimental data (Oostwoud Wijdenes et al., 2011).
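
Continuing the sketch above (and reusing minimum_jerk, FS, and DT from it), a perturbed trial can be generated with the abort–replan scheme by solving for the quintic minimum-jerk segment that starts from the state of the original movement 100 ms after the perturbation and ends at rest at the displaced end position. Treating the sampled adjustment amplitude as an offset that is added to the original end position is our reading of the procedure, and the function names are again ours.

```python
def quintic_segment(x0, v0, a0, x_end, duration):
    """Minimum-jerk segment from the state (x0, v0, a0) to rest at x_end.

    With position, velocity, and acceleration specified at both ends, the
    minimum-jerk trajectory is the unique quintic polynomial that satisfies
    those six boundary conditions."""
    T = duration
    t = np.arange(0.0, T, DT)
    A = np.array([[1, 0, 0,    0,      0,        0],
                  [0, 1, 0,    0,      0,        0],
                  [0, 0, 2,    0,      0,        0],
                  [1, T, T**2, T**3,   T**4,     T**5],
                  [0, 1, 2*T,  3*T**2, 4*T**3,   5*T**4],
                  [0, 0, 2,    6*T,    12*T**2,  20*T**3]])
    c = np.linalg.solve(A, np.array([x0, v0, a0, x_end, 0.0, 0.0]))
    return sum(c[k] * t**k for k in range(6))

def simulate_perturbed_trial(mean_duration, perturbation, rng, latency=0.100):
    """Abort-replan: after `latency`, the original plan is replaced by a new
    minimum-jerk segment toward the displaced end position."""
    duration = rng.normal(mean_duration, 0.030)          # total duration is unchanged
    x_orig = rng.normal(0.0, 0.0065)                     # unperturbed end position (m)
    x_new = x_orig + rng.normal(perturbation, 0.0072)    # adjusted end position (m)
    pos = minimum_jerk(x_orig, duration)
    vel = np.gradient(pos, DT)
    acc = np.gradient(vel, DT)
    i = int(round(latency * FS))                         # sample at which replanning starts
    replan = quintic_segment(pos[i], vel[i], acc[i], x_new, duration - latency)
    return np.concatenate([pos[:i], replan])
```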

For the single-trial analyses, we simulated 1,000 movements for each combination of the four perturbation sizes and three mean movement durations. For the average-trial analyses, we generated 1,000 sets of 20 movements for each combination of the four perturbation sizes and three mean movement durations. Movements were averaged across the 20 repetitions to obtain 1,000 means per combination. For these averages, we only considered positions up to the end of the shortest of the 20 responses in each set, so that every sample of the average was based on all 20 trials. This is relevant for methods that consider the maximum position of perturbed trials to determine the latency (the threshold and extrapolation methods described below).
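
As a minimal sketch of this truncate-then-average step (assuming the trials are NumPy arrays such as those produced by the simulation sketch above):

```python
import numpy as np

def average_set(trials):
    """Average a set of trials after truncating them to the shortest trial,
    so that every sample of the average is based on all trials in the set."""
    n = min(len(trial) for trial in trials)
    return np.mean([trial[:n] for trial in trials], axis=0)
```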

Latency methods

We determined the latency of the movement adjustments in three different ways, each applied to the position, velocity, and acceleration of the simulated movements. For all methods, we computed the latency with respect to the control trials that had the same average movement duration. We chose the parameters for the different methods on the basis of values that we found in the literature. Due to the added variability, trials could deviate in the direction opposite to the perturbation. For all methods, we only considered deviations in the direction of the perturbation in order to determine the response latency.

The first method used a threshold relative to the peak difference in deviation between the movements in trials with an adjustment and the average of the control trials that had the same average movement time. The smallest threshold that we could find in the literature was a fixed threshold of 3 % of the target displacement (Proteau et al., 2009); others used a threshold of 20 % (Johnson et al., 2002) or 25 % (Reichenbach et al., 2011; Reichenbach et al., 2009) of the maximal displacement. The smaller the threshold, the smaller the systematic overestimation of the latency, but also the greater the number of premature response detections. Considering this, we chose a threshold of 10 %. For single trials, the latency was defined as the first moment at which the deviation between the single trial and the average of the control trials that had the same average movement time was larger than 10 % of the maximal deviation between the two. For averaged trials, the latency was defined as the first moment at which the deviation between the average of a set of movement adjustment trials and the average of the control trials that had the same average movement time was larger than 10 % of the maximal deviation between those two averages. We will refer to this method as the threshold method (Fig. 1).
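
A minimal sketch of the threshold method as described above. Here, diff is assumed to be the difference between the perturbed trajectory (a single trial or the average of a set) and the average control trajectory, taken to be positive in the direction of the perturbation; the same function can be applied to position, velocity, or acceleration differences.

```python
import numpy as np

FS = 500.0   # sampling frequency of the simulations (Hz)

def threshold_latency(diff, fraction=0.10):
    """Threshold method: time of the first sample at which the difference
    between the perturbed and control trajectories exceeds `fraction` of
    its own peak."""
    peak = np.max(diff)
    if peak <= 0:
        return np.nan                        # no deviation in the perturbation direction
    onset = np.argmax(diff > fraction * peak)
    return onset / FS                        # seconds after the perturbation
```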

The second method was based on statistical analyses. The response latency was defined as the first moment at which the adjusted movement was outside the two-tailed 95 % confidence interval of the control trials. It might seem inconsistent to only consider responses in the direction of the perturbation and yet to apply two-tailed tests. In the literature, however, only Brenner and Smeets (1997) and Leonard et al. (2011) explicitly mentioned what they used—respectively, one-tailed and two-tailed tests. We assumed that studies that did not explicitly state whether their tests were one- or two-tailed used two-tailed tests (Desmurget et al., 2004; Fautrelle et al., 2011; Gritsenko et al., 2009; Kadota & Gomi, 2010; Kerr & Lockwood, 1995; Prablanc & Martin, 1992; Reynolds & Day, 2007; Soechting & Lacquaniti, 1983). Since it seems that the majority of studies have used two-tailed tests, we also did so. For the single-trial analyses, we computed the two-tailed 95 % confidence interval of the control trials for each average movement time and determined when each perturbed movement left this confidence interval in the direction of the perturbation. For the average-trial analyses, we determined the first moment in time at which the distributions for sets of 20 perturbation trials and 100 control trials that had the same average movement time as the perturbation trials were significantly (p < .05) different with a two-tailed t test. If the perturbed movement distribution was not significantly different at any moment during the trial, we excluded this set from further analysis. We will refer to this method as the confidence interval method.
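
A minimal sketch of both variants of the confidence interval method, assuming that the control distribution can be treated as normal at each sample and that trials are NumPy arrays; function names are ours.

```python
import numpy as np
from scipy import stats

FS = 500.0   # sampling frequency (Hz)

def ci_latency_single(trial, controls, alpha=0.05):
    """Single-trial variant: first sample at which the perturbed trial exceeds the
    upper bound of the two-tailed 95 % interval of the control trials."""
    n = min(len(trial), min(len(c) for c in controls))
    ctrl = np.array([c[:n] for c in controls])
    crit = stats.norm.ppf(1 - alpha / 2)                 # about 1.96
    upper = ctrl.mean(axis=0) + crit * ctrl.std(axis=0, ddof=1)
    outside = trial[:n] > upper                          # perturbation direction only
    return np.argmax(outside) / FS if outside.any() else np.nan

def ci_latency_average(perturbed, controls, alpha=0.05):
    """Average-trial variant: first sample at which a two-tailed t test finds the
    perturbed and control sets significantly different, in the perturbation direction."""
    n = min(min(len(t) for t in perturbed), min(len(c) for c in controls))
    pert = np.array([t[:n] for t in perturbed])
    ctrl = np.array([c[:n] for c in controls])
    _, p = stats.ttest_ind(pert, ctrl, axis=0)
    sig = (p < alpha) & (pert.mean(axis=0) > ctrl.mean(axis=0))
    return np.argmax(sig) / FS if sig.any() else np.nan
```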

The third method (extrapolation method) was introduced by Veerman et al. (2008). This method is based on an extrapolation of a segment of the difference between the adjusted movement(s) and the control movements. For the single-trial analyses, this difference refers to the difference in deviation between the single trial and the average of the control trials that had the same average movement time. For the average-trial analyses, this difference refers to the difference between the average of a set of movement adjustment trials and the average of the control trials that had the same average movement time. The latency was determined by drawing a line through the points at which this difference reaches 25 % and 75 % of the peak difference in that trial or set of trials. The time between the perturbation and the moment at which this line crossed zero was considered to be the latency. If the extrapolation resulted in a negative latency (that is, a zero crossing before the start of the trial), we excluded this trial from further analysis.
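
A minimal sketch of the extrapolation method, under the same conventions as for the threshold method (diff is the difference trajectory, positive in the direction of the perturbation):

```python
import numpy as np

FS = 500.0   # sampling frequency (Hz)

def extrapolation_latency(diff):
    """Extrapolation method: fit a line through the points at which the difference
    first reaches 25 % and 75 % of its peak, and take the time at which that line
    crosses zero as the latency (Veerman et al., 2008)."""
    peak = np.max(diff)
    if peak <= 0:
        return np.nan
    t25 = np.argmax(diff >= 0.25 * peak) / FS
    t75 = np.argmax(diff >= 0.75 * peak) / FS
    if t75 <= t25:
        return np.nan                        # no rising segment to extrapolate through
    slope = (0.75 * peak - 0.25 * peak) / (t75 - t25)
    return t25 - (0.25 * peak) / slope       # can be negative; such trials were excluded
```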

Results

The confidence interval method could not identify a significant difference between the position of a single perturbation trial and the control trials in 1,578 of the 12,000 trials; for velocity this happened in 478 trials, and for acceleration in 117 trials. The extrapolation method applied to positions of single trials identified a negative latency for 2 trials. These trials were excluded from further analysis. For the threshold method and the extrapolation method applied to velocity and acceleration, no trials had to be excluded. Similarly, when comparing the averaged trials, no averages were excluded, irrespective of the method.

Figures 2 and 3 show the response latencies that were determined by applying the different methods to the position, velocity, and acceleration data of, respectively, the single- and average-trial responses. Ideally, the methods would determine the latency accurately (values close to 100 ms) and precisely (small standard deviations), with no dependence of latency on the correction intensity that resulted from the selected movement time and perturbation amplitude (all symbols aligned vertically). The extrapolation method applied to the average acceleration data best meets these criteria.

Fig. 2

Latencies to perturbations of different sizes and different durations, determined on the basis of single trials as a function of the intensity of the response. The columns show the different methods for determining the latencies, and the rows show the kinematic variables used to determine the latency: from top to bottom, position, velocity, and acceleration. The error bars represent standard deviations. The simulated latency is 100 ms (dashed lines)

Fig. 3

Latencies to perturbations of different sizes and different durations, determined on the basis of averaged trials as a function of the intensity of the response. Other details are as in Fig. 2

The threshold method applied to single-trial position data (Fig. 2, upper left panel) resulted in quite precisely determined latencies, but the latencies were influenced by the correction intensity and were not so accurate. When this method was applied to single-trial velocity data, the determined latencies were more accurate, but they were less precise and remained sensitive to differences in correction intensity. When it was applied to single-trial acceleration data, a substantial number of the responses were detected too early, especially for small corrections. As a consequence, the latency was not determined accurately or precisely, and there was a systematic effect of correction intensity on the latency.

When the confidence interval method was applied to single-trial position data (Fig. 2, upper middle panel), the determined latency was very sensitive to the correction intensity and not accurate or precise. Applying this method to single-trial velocity data resulted in a similar pattern. When it was applied to single-trial acceleration data, the determined latency was quite accurate, but not precise, and was influenced by the correction intensity.

When the extrapolation method was applied to single-trial position data (Fig. 2, upper right panel), the pattern of determined latencies was similar to the pattern obtained by the threshold method: quite precise but inaccurate latencies that were influenced by correction intensity. When the extrapolation method was applied to single-trial velocity data, the determined latencies were more accurate, but not so precise, and were influenced by the correction intensity. When this method was applied to single-trial acceleration data, the determined latencies were quite accurate and hardly influenced by correction intensity. However, they were not very precise.

Figure 3 shows the latencies that were determined when the methods were applied to averaged trials. The end positions of these simulations have the same mean and variability as the single trials in Figure 2. The maximum positions in the upper plots are nevertheless not horizontally aligned, because we determined the maximum position at the end of the movement with the shortest duration in the set. When the threshold method was applied to the average position data (Fig. 3, upper left panel), the latencies were determined very precisely, but not accurately, and they were influenced by the correction intensity. When this method was applied to average velocity data, the accuracy of the determined latency increased, but 1-cm corrections were determined less precisely and were influenced by correction intensity. Application of this method to the average acceleration data resulted in many latencies being detected too early, and although this led to increased accuracy, the precision decreased. Moreover, the correction intensity influenced the determined latencies.

Applying the confidence interval method to average position data resulted in quite accurate but imprecisely determined latencies that were influenced by the correction intensity. When the method was applied to average velocity data, the latency was determined very accurately, but not precisely. The influence of correction intensity on the determined latency was small. The same method applied to average acceleration data resulted in most latencies being determined too early, and thus neither accurately nor precisely.

The extrapolation method applied to average position data resulted in precisely but not accurately determined latencies that were influenced by correction intensity. When this method was applied to average velocity data, the latency was determined quite precisely and the influence of correction intensity on the latency decreased, but the latency was not determined accurately. When this method was applied to average acceleration data, the determined latency was very accurate and was influenced very little by correction intensity. Except for the response with the lowest correction intensity (1-cm perturbation and 500-ms duration), the determined latency was also very precise.

Discussion

We set out to describe the accuracy of three commonly used methods of determining the latency of online movement adjustments, and to determine the influences of different response intensities, different kinematic variables, and single-trial or averaged data analyses on the determined latencies. The extrapolation method resulted in the most accurate and precise response latencies, especially when it was applied to acceleration data. The influence of response intensity on the determined latency was also marginal for this method. Note, with respect to precision, that Figures 2 and 3 compare judgments for single trials with judgments for means of 20 trials. If the consideration is whether to determine the latencies for individual trials and average the latencies, or to average the responses and then determine the latencies, the lengths of the error bars in Figure 2 should be divided by about 4.5 (\( \sqrt{20} \)). This reduces the differences, but overall, first averaging still seems to be the best choice.

Obviously, the parameters that we chose for the different methods have a substantial influence on the accuracy of the determined latencies. Our choice was based on values that we found in the literature. It would be possible to adjust the percentages for the threshold method and the number of standard deviations for the confidence interval method to optimize the determined latencies. Increasing the threshold or the number of standard deviations (or standard errors) would result in fewer early detections, and thus in larger systematic overestimations of the latency. Moreover, one could argue that different kinematic variables should have different thresholds because differentiation decreases the signal-to-noise ratio. However, fine-tuning parameters is only possible for this artificial data set, for which we know the actual latency. For a real data set, the response latency is unknown, which makes it impossible to fine-tune the parameters in a meaningful manner. We therefore chose one set of conventional parameters to illustrate the behavior of the different methods in terms of accuracy, precision, and the influence of correction intensity. For the extrapolation method, we used the values from the first study that used this method (Veerman et al., 2008).

The extrapolation method assumes that the difference between the movement adjustment and the control movement initially increases more or less linearly. This assumption seems to be supported by experimental data (Oostwoud Wijdenes et al., 2011; Veerman et al., 2008). If the correction intensity is low—for instance, because the size of the correction is small and the movement duration is long—this linear part will be short and noisy, so the method’s precision will deteriorate (see the error bar for the 1-cm/500-ms condition in the bottom right panel of Fig. 3). In other cases, taking too large a part of the increasing difference curve will reduce precision, because nonlinear parts of the response would be treated as if they were linear. Taking too small a part will make the method more sensitive to noise.

The range of response speeds that we simulated was limited. The fastest simulated corrections were 4 cm in 200 ms, and some studies have reported faster responses—for example, 13-cm corrections in 500 ms (Prablanc & Martin, 1992), or 14-cm corrections in 474 ms (Brenner & Smeets, 2004). However, in general, the methods are more accurate for faster responses, so we do not think that this limits our conclusions. The slowest corrections that we simulated were 1 cm in 400 ms, and this resembles the slowest corrections that we could find in the literature: 1.5 cm in 650 ms (Brière & Proteau, 2011; Heath, Hodges, Chua, & Elliott, 1998).

The number of movements that we simulated is much larger than the standard sample size that one would measure in an experiment. The expected standard error for the mean latency in a specific experiment can be determined by dividing the standard deviation that is plotted in Figure 2 by the square root of the sample size. For example, the standard error for 20 measured trials with an average correction of 1 cm in 500 ms (red asterisk in the middle right plot of Fig. 2) would be expected to be about 13 ms (the SD is about 60 ms). This means that you would expect to find a significant overestimate of the latency (the mean is at about 130 ms) using this method.

Whether single- or averaged-trial analyses will result in more accurate responses depends on the method used. The threshold method applied to single-trial acceleration data resulted in a marked systematic error, whereas it was more accurate when applied to average acceleration data. However, the accuracies were comparable for single- and average-trial analyses when considering position or velocity data. The confidence interval method was more accurate when it was applied to average trials rather than single trials for position and velocity, but not for acceleration. The accuracy of the extrapolation method was slightly better for the average-trial analyses.

In our simulations, we did not vary the response latency. One difference that we have therefore not considered between analyzing individual trials and analyzing averages of trials is that an analysis based on averages of trials will often be biased toward the shortest latencies, rather than providing information about the average of the individual latencies within the set of trials. This is particularly evident for the threshold and extrapolation methods, because the average position (or velocity or acceleration) of the perturbed movements starts to differ from the control trials as soon as the first of the 20 trials in the set starts to deviate. Whether this bias is an advantage or a disadvantage is not evident, and probably depends on the question that one is trying to answer. To our knowledge, the true shape of the probability distribution of response latencies is unknown. We speculate that it is a highly skewed distribution with a long tail toward longer latencies. The mean latency is therefore likely to exaggerate the time needed to respond. Perhaps taking the median response latency of individual trials would be the safest option, but as we have seen, this could be less reliable, so it might not be the best option if one knows that the latency is not very variable, or if one is interested in the minimal response latency.

We only considered some sources of noise in our simulated movements: behavioral variability (in end position and movement time) and limited measurement precision (the sampling frequency and the spatial resolution). Obviously, in real data more sources of noise are present. As a consequence, over the time course of the movements, our data are less variable than a real data set would be. However, we see no reason to expect fundamentally different performance for variability that arises from other sources.

We manipulated the signal-to-noise ratio by varying the response intensity while keeping the variability in movement duration and end position the same. As is shown in Figures 2 and 3, this results in differences in the determined latencies between the different response intensities. The latencies of adjustments that are executed faster are determined more accurately because such adjustments reach the threshold sooner, and can thus be distinguished from the noise sooner and more reliably. If one were to manipulate the signal-to-noise ratio by increasing the size of the noise instead of the size of the signal, we would expect similar effects.

On average, the maximum positions that we used undershot the targets to a greater degree for shorter movement times (Fig. 3). This is also a result of the way that we manipulated the signal-to-noise ratio: We used the same simulated variability in movement time for all movement durations, making the relative variability larger in movements that ended sooner. The undershoot arose because positions were considered only until the end of the shortest movement in the set. Making the variability proportional to the movement duration would equate the undershoots across movement times, but there is no reason to expect that this would fundamentally change our results. The undershoot increased with the movement amplitude, because amplitude and duration were varied independently, so the time-related undershoot simply scaled with movement amplitude.

Determining movement onset accurately is challenging (Teasdale et al., 1993). Determining the onset of a movement adjustment accurately is even more challenging. We showed that for our simulated movement adjustments, the extrapolation method applied to averaged acceleration data resulted in the most accurately and precisely determined response latencies, and that latencies determined in this manner were least susceptible to systematic biases related to the intensity of the response.