Online reach adjustments induced by real-time movement sonification

Movement sonification can improve motor control in both healthy subjects (e.g., learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether improved motor control and learning from movement sonification are driven by feedback-based real-time ("online") trajectory adjustments, by adjustments to internal models over multiple trials, or by both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) and terminal sonification feedback. We found that reaches made with online feedback were significantly more jerky than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). Using a between-subjects design, we found that online feedback was associated with improved motor learning of a reach path and target relative to terminal feedback; however, using a within-subjects design, we found that switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments which improve internal models over multiple trials, but which are not themselves helpful online corrections.

Motor control is facilitated by inverse internal models, which generate feedforward motor commands, and forward internal models, which simulate sensory (i.e., afferent) feedback (Ishikawa, Tomatsu, Izawa, & Kakei, 2016; Kawato, 1999; Miall & Wolpert, 1996). In manual movements such as reaching, these models are used both to plan motor trajectories before movement onset and to make real-time corrections (Gaveau et al., 2014). Feedback-based (i.e., "closed-loop") control is also observed in human movement. Movements such as reaches often require real-time feedback-based trajectory adjustments, i.e., online trajectory adjustments. The brain can use visual and proprioceptive feedback of the relative position of the hand and a seen target object (Gaveau et al., 2014; Gomi, 2008), and binaural auditory feedback when reaching towards sound sources (Boyer et al., 2013), to speed up movements or improve accuracy (Sarlegna & Mutha, 2015). Sensory feedback is also presumably the main (or only) error signal for correcting internal models, i.e., for making diachronic (inter-movement) internal model adjustments.
How does acoustic feedback from movement sonification devices enhance motor control and learning? Such feedback has been shown to affect both movement kinematics and spatial accuracy, including in manual tasks such as reaches (Boyer, Bevilacqua, Susini, & Hanneton, 2017; Danna et al., 2015; Fehse et al., 2020; Liu et al., 2022; Matinfar et al., 2023; Rosati, Oscari, Spagnol, Avanzini, & Masiero, 2012). Presumably the effects of sonification feedback are driven by diachronic internal model adjustments. A question which has received less attention is whether sonification feedback can also prompt online trajectory adjustments. This study addresses that question.
If movement sonification can function as sensory replacement or augmentation, affording perception of the body via acoustic feedback much as proprioception and vision do, then it would be expected that (like vision and proprioception) it can drive online trajectory adjustments. In some studies, effects from movement sonification are observed very quickly, within just one or two repetitions of the movement. For example, O'Brien et al. (2020) found that playing a squeak sound in real time during negative torque on the pedal upstroke while cycling lowered negative torque after only one or two pedal strokes. However, the rapid emergence of trajectory adjustments from real-time movement sonification does not, in itself, show that the adjustments are real-time responses to the acoustic feedback. Especially in a repetitive movement such as pedaling, the auditory feedback from one movement repetition (e.g., one pedal stroke) might merely prompt a change for the next movement repetition instead of being integrated as an online correction of the current action. That is, the feedback might merely be driving diachronic internal model adjustments.
How can online trajectory adjustments be distinguished from diachronic internal model adjustments? The former would not be possible if the feedback were terminal, i.e., delivered after the movement is completed. So, we attempted to rule out diachronic internal model adjustments as the sole source of sonification effects by comparing movements made with real-time sonification (online feedback) and movements for which the same sonification was played 500 ms after completion of the reach (terminal feedback). Specifically, we looked for differences consistent with the presence of online trajectory adjustments.
As our sonified movement, we chose reaches towards invisible targets along an invisible specified path, in which both the target location and the path to that target were localized exclusively by sonification. We chose reaches because reaches and other manual tracking tasks have been studied extensively, both in the presence of sonification and more generally. These studies sometimes constrain the movement, e.g., having participants "reach" by dragging a stylus across a 2D tablet (Fehse et al., 2020), "track" by manipulating a joystick (Oscari, Secoli, Avanzini, Rosati, & Reinkensmeyer, 2012), or "point" while manipulating a robotic arm (Ruttle, 't Hart, & Henriques, 2021). However, following other studies (Boyer et al., 2013; Nikmaram et al., 2019; Schmitz et al., 2014; Scholz et al., 2016), we had participants freely make reaches through 3D space without any physical constraints. We felt unconstrained reaches would be more likely to elicit online trajectory adjustments.
Movements have both topokinetic and morphokinetic aspects (Mullis, 2008; Paillard, 1991). Success for typical reaches and pointing movements depends only on "hitting" a location in space, and hence is fully topokinetic. In our task, however, the path to the target is also relevant for success, incorporating gestural aspects into the movement: success depends on "traveling" through certain body-segment configurations via the right set of motor commands. This morphokinetic aspect of our task movement might also be more likely to elicit online adjustments from online feedback than a fully topokinetic reach would.
What differences might be expected between reaches made with online and terminal feedback, if the former prompts online trajectory adjustments? Any trajectory adjustment would necessarily involve a change in muscle twitching, and such a change would be reflected in a change in force on the arm. By Newton's second law (F = ma), this change in force would mean a change in acceleration. Online trajectory adjustments would also be expected to affect reach accuracy, either positively or negatively. Hence, if online feedback is used to make online trajectory adjustments, then there should be more jerk (the derivative of acceleration) in the presence of online feedback than with terminal feedback, and there should also be some difference in reach error between the two types of feedback.
Work has already been done on error and jerk effects from movement sonification in unconstrained reaches. Boyer et al. (2013) found no difference in accuracy when pointing towards an invisible sound source with hand position simulated as a binaural sound source, and Rosati et al. (2012) found no effect on accuracy from online sonification feedback in joystick- and tablet-based tracking tasks. However, as Schmitz et al. (2014) note, other studies, including Oscari et al. (2012), have found accuracy benefits. Even so, the lack of comparison to a terminal feedback condition in these studies leaves open the possibility that the effects were due not to online trajectory adjustments but to diachronic internal model adjustments. Indeed, it is the latter which both Schmitz et al. (2014) and Oscari et al. (2012) infer from their results. While presumably there were diachronic internal model adjustments in these studies, Boyer et al. (2013) also found more acceleration peaks during the deceleration phase with online feedback. This increase in the number of acceleration peaks plausibly reflects the jerk increase expected from online trajectory adjustments.
It is possible that feedback from some, but not all, kinds of movement sonification prompts online trajectory adjustments. Thus, an important question concerns what kind of sonification to use. It seems doubtful that a results-oriented approach (e.g., a beep indicating the target had been hit) would prompt online adjustments. So, we considered performance-oriented approaches which could provide users real-time information about both their path and target error.
One of the most popular strategies, used by Boyer et al. (2013), is to exploit stereophonic spatial information in binaural sound (Dubus & Bresin, 2013). However, as Boyer and colleagues noted, this approach to sonification faces two challenges which might explain why it did not improve accuracy in their pointing task. First, users must rely on memory, as it is difficult to present both target and arm position concurrently. Second, given the poor spatial resolution of binaural hearing, proprioception likely affords better spatial information about hand position than the binaural sonification (Grantham, Hornsby, & Erpenbeck, 2003; van Beers, Sittig, & Denier van der Gon, 1998), negating any potential advantage.
A third widely used strategy is "error" sonification (Boyer et al., 2017; Matinfar et al., 2023; Oscari et al., 2012; Rosati et al., 2012; Schmitz & Bock, 2014). Error sonifications convert some error parameter, such as distance from target, into an auditory signal. We adopt this approach, sonifying instantaneous distance to the target as variations in pitch and distance off the intended reach path as variations in volume. This sonification does present users with an unfamiliar auditory encoding of spatial information, so users must (at an implicit, "unconscious" level) learn how the auditory feedback covaries with their movement. However, pilot data suggested this sonification enabled users to reach an invisible target along an invisible path with approximately half the error expected by chance within 50 trials (https://doi.org/10.31234/osf.io/4ax8n).
We chose an error sonification over binaural and ecological ones because we felt it would be more apt to prompt online trajectory adjustments. Binaural sonifications face the two challenges noted above: (1) nonconcurrent target and arm feedback and (2) poor spatial resolution offering no advantage over proprioception. The error sonification overcomes the first challenge by combining information about the target, path, and arm position into the same auditory signal. It meets the second challenge by compressing 3D spatial information into a lower-dimensional (2D) space of frequency and power responses which audition can decode with millisecond-scale temporal resolution and low (30-70 ms) latency (Fitzgibbons, 1983, 1984; Green, 1971; Howard et al., 2000; Nourski, 2017). It is reasonable to assume that this compression conveys more spatial information useful for trajectory control than either proprioception or a binaural sonification. Further, this compression of 3D spatial information into lower-dimensional acoustic features naturally decoded by audition also has the potential to convey more and better spatial information than any familiar ecological sonification. For example, while pitch-to-height mappings are familiar, they convey only one dimension of spatial data, and it is not obvious how to extend them to 3D space.
Other approaches are also possible. For example, Fehse et al. (2020) constructed an auditory movement space by sonifying hand position so that position along a left-right axis was mapped to stereo balance and position along a vertical axis was mapped to pitch. While it is impossible to survey all such approaches, the concurrent target-path-arm feedback and the spatially rich, low-dimensional encoding of the error sonification seemed most likely to produce a high correlation between auditory responses and motor responses, potentially leading to auditory-motor entrainment at the neural level (Crasta, Thaut, Anderson, Davies, & Gavin, 2018; Merchant, Grahn, Trainor, Rohrmeier, & Fitch, 2015; Thaut, McIntosh, & Hoemberg, 2015), which would, in turn, produce online trajectory adjustments.
In this study, we looked for evidence of online trajectory adjustments from sonification feedback by comparing the kinematics (jerk) and error of morphokinetic reaches made with online and terminal sonification feedback. We hypothesized that an online error sonification would prompt online trajectory adjustments in a morphokinetic reaching task. Specifically, we hypothesized that reaches made using online sonification feedback would be associated with less error relative to a model reach and with more jerk than reaches made with only terminal feedback.

Participants
Ninety participants from the York University Undergraduate Research Participant Pool were recruited. All participants gave written informed consent and received course credit. The study was approved by the York University Human Participants Review Sub-Committee (certificate #e2022-058) and was in accordance with the principles of the Declaration of Helsinki. Of the 90 participants, 20 did not complete the study, leaving 70 from whom we collected and report data here (24 self-reported men, 46 women; age: mean = 21.44 yr, min = 18 yr, max = 44 yr, std. dev. = 4.72 yr). Of these 20, 12 were unable to return reliably to the same initial position within the tolerances of the sonification system. Returning to the same initial position over the 100 total trials was nontrivial, as slight rotation of the arm or a slight change in posture could lead the sonification system to no longer recognize the position. In addition, incidental movement of the system's sensors relative to a participant's arm could also prevent the system from recognizing the initial position. Seven participants failed a screening during the demo meant to check that they could reliably discriminate task-relevant differences in the sonification feedback. Technical difficulties during collection meant one participant could not finish the task.

Equipment and Sonification
The sonification was generated by a bespoke unit built around an ESP32 feather (Adafruit HUZZAH32, ID 3591) and TDK InvenSense ICM-20948 inertial sensors (SparkFun breakouts, ID 15335). The unit was programmed in Arduino (v2.1.0) and is described fully in a preprint (https://doi.org/10.31234/osf.io/4ax8n). It was designed, and verified, to have < 2 ms latency and to sample motion and update sound at 1 kHz. It was run at these settings for online feedback. Reaches were recorded both with the inertial sensors on the arm (see Fig. 1), from which the sonification unit measured rotation in quaternions (data collected via Serial by Decisive Tactics, v2.0.15), and with an optical motion tracking system (OptiTrack V100:R2 cameras) measuring linear position in Cartesian space (data collected via Tracking Tools software by NaturalPoint, v2.3.4). The sonification unit used one inertial sensor on the forearm and one on the upper arm, each returning an independent quaternion encoding rotation from the initial position. These two quaternions were left in unaligned coordinates. The sonification feedback was played through Sony MDR-ZX110 headphones.
The sonification itself was defined by equations for pitch p and power (i.e., "volume") v. Here p_max = 6000 Hz and v_max = 511, meaning maximum power output was divided into 512 levels. The system was set so that maximum power was 70 dB. For each motion sample the inputs were distance-to-target d_t and distance-off-path d_h, where the latter was the distance between the current position and the nearest time-warped model position, as determined by a real-time time-warping algorithm used by the sonification unit. (The model is described in the Procedure section below.) In both cases, distance was defined as the sum of the Euclidean distances (forearm sensor plus upper arm sensor) between the current quaternion readings and the model quaternion readings. The constants a_p and a_v were set with initial pitch p_i = 440 Hz and i_t being the distance to the target when in the initial position.
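A sketch of equations consistent with this description, assuming exponential mappings anchored at p_i, p_max, and v_max (the exact published forms and the value of a_v should be taken from the unit's code at https://osf.io/pvwfm/), is:

```latex
p(d_t) = p_i \, e^{\,a_p\,(i_t - d_t)}, \qquad
a_p = \frac{\ln\!\left(p_{\max}/p_i\right)}{i_t}, \qquad
v(d_h) = v_{\max} \, e^{-a_v\, d_h}
```

Under this form, p(i_t) = p_i = 440 Hz at the initial position, p(0) = p_max = 6000 Hz at the target, and v(0) = v_max when exactly on path.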
The full code for the sonification unit, including for computing the sonification and the real-time time-warping, is available online (https://osf.io/pvwfm/). This link also makes available video demonstrations of the sonification, so that typical examples of sonified reaches can be heard and watched.
The net effect of this sonification is that pitch increases exponentially (from 440 Hz to a maximum of 6000 Hz) as the distance to the target decreases, and volume decreases exponentially (from 70 dB to 0 dB) as the distance off the path to the target increases. Thus, a successful reach was heard as rising pitch and constant high volume, while a poor reach was heard as stagnating (or falling) pitch and falling volume.
We selected 70 dB as the maximum power level to minimize the risk of exposing participants to dangerous sound levels. We selected 440 Hz (note A4) as the initial pitch because it is reasonably pleasant and still leaves room for pitch to drop if participants initiate their reach in the direction opposite the target. We used exponential functions for both the pitch and volume curves (instead of linear functions) to account for the Weber-Fechner law. We set maximum pitch at 6000 Hz because this was a value that we anticipated most participants would find tolerable yet was still relatively high. Because the concept of our sonification was to compress spatial information (distance from target) into temporal information (frequency, or pitch), we wanted to maximize the pitch range within tolerable limits, as psychoacoustic studies show that auditory temporal resolution improves as pitch increases, at least as measured by temporal gap resolution (Fitzgibbons, 1983). The 440-6000 Hz range in conjunction with the exponential function had a further unexpected benefit. We informally noticed that this range and function generally led to a large portion of the feedback on any one trial falling within 1-4 kHz, a range in which the auditory system is particularly good at pitch encoding and most sensitive to low-power sounds (Carlile, 2011). Finally, we had the system update the acoustic feedback at 1 kHz with 1 Hz pitch resolution to take full advantage of auditory temporal and spectral resolution. At 1 kHz, the just-detectable difference in pitch is 2-3 Hz, while for a narrowband sound like ours, sensitivity to temporal modulation is greatest for update rates of 100-200 Hz and is present up to 1 kHz (Carlile, 2011).

Fig. 2. Trial and session structure. All participants performed 100 reaches (one reach per trial), broken into no-feedback (NoFeedback-Block), feedback (Main-Block), and switched-feedback (Switch-Block) reaches. The 100 reaches were performed without breaks between blocks. Trials were self-paced. Participants began in the initial position (IP). The sonification system indicated that it was ready and that it recognized the IP by playing a half-power (volume) 440 Hz constant tone (the IP tone). Upon hearing the IP tone, participants reached when ready, paused briefly at the end of their reach, and then returned to the IP. In this figure and all subsequent ones, the terminal condition is labelled in red, the online condition in blue. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Procedure
To compare jerk and error between online and terminal feedback, we used a mixed within-subjects and between-subjects design. All participants made 100 self-paced reaches while seated and blindfolded: first 25 reaches without feedback (NoFeedback-Block), next 50 with one kind of feedback (Main-Block), and a final 25 for which the feedback was switched (Switch-Block). Participants were divided into conditions by their learning feedback, i.e., the feedback type (online or terminal) which they would be given in Main-Block and would use to learn their model reach. Participants in the online condition received online feedback during Main-Block and terminal feedback during Switch-Block, while participants in the terminal condition received terminal feedback during Main-Block and online feedback during Switch-Block (Fig. 2). Half of the 70 participants were assigned to the online condition (13 self-reported men, 22 women; age: mean = 22.23 yr, min = 18 yr, max = 37 yr, std. dev. = 4.97 yr) and half to the terminal condition (11 self-reported men, 24 women; age: mean = 20.66 yr, min = 18 yr, max = 44 yr, std. dev. = 4.39 yr). A demo (about 20 reaches) was given before the 100-trial task to familiarize participants with the sonification and to check their understanding of the task.

Model selection
During NoFeedback-Block, participants were instructed to reach to "random" places. For each participant, the target and specified path were defined by pseudo-randomly selecting one of their first 21 NoFeedback-Block reaches as a model. We excluded the last four reaches of NoFeedback-Block as possible models to lessen the odds that participants would remember the model implicitly and repeat it by chance early in Main-Block. The full set of recorded model samples became the specified path, and the final model sample became the target. The selected model was kept constant throughout Main-Block and Switch-Block. In both online and terminal conditions, for all reaches in Main-Block and Switch-Block, the model was played back under the same sonification 500 ms after the feedback (Fig. 2). This allowed participants to complete the task (reach for the target along the intended path) by attempting to find the reach which best matched the model playback. In the demo, participants were told that rising pitch indicated they were approaching the target, that falling volume indicated they were going off path, and that the task could also be completed by matching the sound of their reach to the model playback. (See Supplemental Material, Section 2, for the complete session script.)
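The model-selection step can be sketched as follows (a minimal illustration; the function and variable names are ours, not the authors'):

```python
import random
import numpy as np

def select_model(nofeedback_reaches, n_candidates=21, seed=None):
    """Pseudo-randomly pick one of the first `n_candidates` no-feedback
    reaches as the model (excluding the last four of the 25-reach block).
    The model's sample sequence defines the specified path and its final
    sample defines the target."""
    rng = random.Random(seed)
    idx = rng.randrange(n_candidates)   # index into the first 21 reaches
    model = np.asarray(nofeedback_reaches[idx])
    path = model                        # full sample sequence = specified path
    target = model[-1]                  # final sample = target
    return idx, path, target
```

The same model would then be held fixed for all Main-Block and Switch-Block trials.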

Data processing
The sonification unit estimated the start and end point of each reach in real time and saved data as segmented reaches. (This estimation was also required to begin and end the sonification feedback at the correct times.) OptiTrack returned continuous measurements. A preprocessing algorithm was written to synchronize the sonification unit and OptiTrack recordings and to segment the OptiTrack data into reaches based on the sonification unit data (see Supplemental Material, Section 3).
The preprocessing algorithm also checked for short (< 450 ms), overrun (> 1750 ms), and floating reaches (initial position > 15 cm from the model's initial position). Short and floating reaches were removed, as they occurred when the sonification unit failed to estimate the start and end points accurately. Overruns occurred when participants failed to pause long enough at the end of their reach (returning to the initial position in one smooth motion) for the sonification unit to detect the end point. The preprocessing algorithm estimated the true end point of an overrun as the minimum-velocity point after 500 ms. Of the 7000 reaches recorded, 6888 (98.4%) remained after shorts and floats were removed. Example segmented reaches recorded by OptiTrack are shown in Fig. 3.
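These screening rules can be sketched as follows (thresholds from the text; the function names and the use of `np.gradient` to estimate velocity are our assumptions):

```python
import numpy as np

def screen_reach(start_pos, model_start_pos, dur_ms):
    """Classify a segmented reach using the thresholds described above."""
    if dur_ms < 450:
        return "short"       # removed: start/end estimated inaccurately
    if np.linalg.norm(np.asarray(start_pos) - np.asarray(model_start_pos)) > 15.0:
        return "floating"    # removed: initial position > 15 cm off (cm units)
    if dur_ms > 1750:
        return "overrun"     # kept, but the end point is re-estimated
    return "ok"

def reestimate_end(t_ms, pos):
    """For overruns: index of the minimum-velocity sample after 500 ms,
    taken as the estimated true end point."""
    v = np.linalg.norm(np.gradient(pos, t_ms, axis=0), axis=1)
    late = np.flatnonzero(t_ms >= 500)
    return late[np.argmin(v[late])]
```

For a reach that moves and then comes to rest, `reestimate_end` lands in the resting segment, approximating where the participant paused.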
Once the data were synchronized and segmented into reaches, we computed both jerk and error for each reach (Fig. 4). We measured both linear motion of the wrist through exocentric space (defined by the Cartesian coordinates in Fig. 1) with OptiTrack (spatial jerk and spatial error) and rotational motion of the arm with the sonification unit (rotational jerk and rotational error). For error, we measured both target error, i.e., Euclidean distance from the target at the end of the reach, and path error, i.e., mean Euclidean distance from the model after time-warp adjustments (R package dtw, v1.23-1, default settings) to the reach, based on the velocity curves of the reach and model (Müller, 2007). For rotational error (both target and path), separate distances were computed for each of the two sensors in the sonification unit and summed to find a total distance.

Fig. 4. Visual representation of spatial error and spatial jerk. The orange dotted path is the model reach. The red dotted path is a reach made during the main block with terminal feedback. Spatial target error is the black line at the top: the Euclidean distance between the final points of the two reaches. The gray lines labelled d_h are the distances off path, for fourteen example points. Spatial path error is the mean of these distances. The three thick black arrows labelled J_x, J_y, and J_z are directional jerk, i.e., jerk along each spatial axis, for one selected point in the reach. The cyan arrow is the total jerk vector for that point. Real data for one reach are shown, except that the jerk arrows are drawn out of true proportion so that all three can be seen. Rotational error and jerk are defined analogously but are harder to visualize, as they involve summing two 4D vectors. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
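The two error measures can be sketched as follows. The authors used the R `dtw` package; this is a simplified Python stand-in with a textbook DTW alignment (our implementation, not the published settings, which align on velocity curves):

```python
import numpy as np

def target_error(reach, model):
    """Euclidean distance between the reach and model end points."""
    return float(np.linalg.norm(reach[-1] - model[-1]))

def dtw_path_error(reach, model):
    """Mean sample-to-model distance along a dynamic-time-warping
    alignment of the two trajectories (classic O(n*m) DTW)."""
    n, m = len(reach), len(model)
    d = np.linalg.norm(reach[:, None, :] - model[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack to recover the aligned pairs and average their distances
    i, j, dists = n, m, []
    while i > 0 or j > 0:
        dists.append(d[i - 1, j - 1])
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(np.mean(dists))
```

For rotational error, the same computation would be run per sensor on the quaternion streams and the two distances summed, as described above.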
To quantify spatial and rotational jerk at each time step of each reach (with both sonification unit and OptiTrack data), we computed directional jerk magnitude (Fig. 4). To compute this, we first took the third derivative of position along each recording axis (i.e., x, y, and z for spatial jerk, and the eight total quaternion axes of the two sensors for rotational jerk), then found the Euclidean magnitude of the third-derivative vector at each time step. Finally, a jerk number was found for each reach by taking the mean of the directional jerk magnitude across all time steps. The first and last 200 ms of each reach were cut off the recordings before taking this mean. These cuts were made for practical measurement reasons. Specifically, much of the first and last 200 ms is not genuine movement but "jitter" as the system (both human arm and sensors) settles into a new state (motion or rest). This can be demonstrated by comparing the time cut against the distance cut off the reach paths: mean reach distance after the cut (across all reaches) was 0.82 (std. dev. = 0.11) of total distance, whereas mean reach time after the cut was only 0.63 (std. dev. = 0.10) of total time. Further, the start and stop of each reach are difficult to measure accurately, owing both to uncertainty over the "true" start and stop times and to increased noise. A final consideration is that online adjustments are presumably not even possible in the first 200 ms (Gaveau et al., 2014).
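The per-reach jerk number can be sketched as follows (a minimal version; low-pass filtering of `pos`, described under outlier removal, is assumed to have already been applied):

```python
import numpy as np

def mean_jerk(pos, fs=1000.0, trim_ms=200):
    """Mean directional-jerk magnitude for one reach: third derivative of
    position along each axis, Euclidean magnitude per time step, then the
    mean with the first and last 200 ms trimmed."""
    dt = 1.0 / fs
    jerk = np.diff(pos, n=3, axis=0) / dt**3   # third derivative per axis
    mag = np.linalg.norm(jerk, axis=1)         # directional jerk magnitude
    trim = int(trim_ms * fs / 1000)
    return float(np.mean(mag[trim:len(mag) - trim]))
```

The same function applies to the 3-axis OptiTrack data and, with an 8-column input, to the stacked quaternion axes of the two inertial sensors.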
We computed directional jerk (jerk along each spatial or quaternion axis) before taking a magnitude to ensure that jerk would reflect trajectory adjustments. Mathematically, there can be changes in jerk along one axis that are offset by proportional changes along another. While at any one moment our error sonification carries only information about error magnitude and none about error direction (either distance off path or distance from target), a longer stretch of the feedback (with changing pitch and volume), combined with proprioceptive information about arm trajectory or a pre-planned motor routine giving that trajectory, would contain potentially decodable information about error direction. For example, falling pitch while reaching towards the right signals that the target is towards the left. Even if the brain is unable to learn to extract this information in real time, twitches in response to the feedback would presumably be directional anyway (i.e., not a proportional change in velocity across all axes), if nothing else due to motor noise.
For both jerk (spatial and rotational) and error (all four kinds), we normalized each reach's value to facilitate comparison between the spatial and rotational data, which we expected to be similar. To normalize jerk, for each participant we first found the mean J_m of the reach jerk values for all of that participant's reaches in NoFeedback-Block; then, for each reach r with jerk J(r), we computed the normalized jerk as (J(r) − J_m) / J_m. To normalize error, for each participant we first found the mean E_m of the reach error values for all of that participant's reaches in NoFeedback-Block (excluding the model itself); then, for each reach r with error E(r), we computed the normalized error as (E(r) − E_m) / E_m. Normalized reach error and normalized reach jerk were the dependent variables for our statistical analysis (making six total variables: normalized spatial and rotational jerk, normalized spatial and rotational target error, and normalized spatial and rotational path error).
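In code, this normalization reads as follows (the formula is reconstructed from the percentage-point interpretation used in the outlier examples, where a value of −0.5 means 50% below the participant's NoFeedback-Block mean):

```python
import numpy as np

def normalize(values, baseline_values):
    """Express each reach's jerk or error as a proportional deviation from
    the participant's NoFeedback-Block mean (0 = at baseline)."""
    m = float(np.mean(baseline_values))
    return (np.asarray(values, dtype=float) - m) / m
```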
For practical reasons we analyzed both the rotational data from the sonification system and the spatial data from the optical system. First, given that the rotational data were the basis for segmenting the spatial data and were the data used to produce the sonification feedback in real time, it seems natural to analyze them. Second, the spatial recording is worth analyzing because it provides a more intuitive and easily understandable picture. Third and most important, the two sets of recordings were used both to calibrate optimal filter settings against one another and to check each other, by discarding reaches for which there was high disagreement between the recordings. Millisecond-scale jerk measurements are extremely difficult to make owing to noise. Discarding one of the two recordings would remove this important basis for filtering and lead to more arbitrary (or even biased) and less trustworthy data. We expand on this third point next.

Outlier removal and data filtering
Before conducting our statistical analysis, we removed both extreme-outlier reaches and reaches with high spatial-rotational disagreement in jerk or error. A reach was identified and removed as an extreme outlier if, for the participant making the reach, on any of the six dependent variables, it was more than three times the interquartile range above the third quartile or below the first quartile. A reach was identified and removed as having high spatial-rotational disagreement if, for any of the six dependent variables, the difference between the normalized values was > 0.5 (i.e., greater than fifty percentage points). For example, if OptiTrack gave a normalized error value of −0.9 for a reach (90% below mean error during NoFeedback-Block) and the sonification system gave a value of −0.1 (10% below), then the difference between the systems would be 0.8, > 0.5. There were 300 outlier reaches (4.3%) and 235 high-disagreement reaches (3.4%). After removal of extreme outliers and high-disagreement reaches, along with removal of reaches which did not record properly (shorts and floats, 112 total), 6353 reaches remained (90.76% of the original 7000).
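The two removal rules can be sketched as follows (our function names; the use of `np.percentile` with linear interpolation for the quartiles is an assumption about how they were computed):

```python
import numpy as np

def extreme_outliers(x, k=3.0):
    """Boolean mask of values more than k*IQR above Q3 or below Q1,
    applied per participant and per dependent variable."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x > q3 + k * iqr) | (x < q1 - k * iqr)

def high_disagreement(spatial, rotational, thresh=0.5):
    """Boolean mask of reaches whose normalized spatial and rotational
    values differ by more than `thresh` (0.5 = fifty percentage points)."""
    return np.abs(np.asarray(spatial) - np.asarray(rotational)) > thresh
```

A reach flagged by either rule on any of the six dependent variables would be dropped.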
While 9.2% of reaches is a lot of data to remove, the high removal rate came down to two factors: (1) the task itself, being morphokinetic, was more difficult than a typical, purely topokinetic reach task, and (2) accurate millisecond-scale jerk measurements are difficult to make. Neither of these factors is likely to introduce bias into the data. Further, the removal of high-disagreement reaches should improve the quality of the data by removing untrustworthy measurements. Before computing jerk, filtering of the raw position data was required, as noise washes out the signal by the third derivative. A low-pass Butterworth filter (order 9, cutoff 10 Hz) was applied along each axis of the position data (R package signal, v1.8-0). Filter settings were selected by hand-tuning, finding the settings which balanced minimizing the difference between normalized spatial and rotational jerk against minimizing reach loss to outliers and system disagreement. We reasoned that because OptiTrack and the sonification unit have different noise sources and approximately track the same signal (linear jerk at the wrist vs. rotational jerk of the arm), disagreement between them indicates noise. The mean difference between normalized spatial and rotational jerk measurements, after removing outliers and high-disagreement reaches, was 0.06 (i.e., six percentage points). For example, if OptiTrack measured a normalized jerk of 0.9 for a reach and the sonification system's measurement was the mean (0.06) away, then the sonification system's measurement for that reach would be 0.96 or 0.84. The mean differences between spatial and rotational measurements for target and path error were, respectively, 0.1 and 0.12. (See Supplemental Material, Section 6, for a full discussion of jerk processing and Section 7 for a discussion of outlier removal.)
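The filtering step can be sketched with SciPy (order and cutoff from the text; our use of second-order sections and zero-phase `sosfiltfilt` are implementation assumptions, made because a ninth-order filter in transfer-function form can be numerically unstable at so low a cutoff):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_position(pos, fs=1000.0, cutoff=10.0, order=9):
    """Zero-phase low-pass Butterworth filter along each axis of the
    position data, applied before differentiation for jerk."""
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, pos, axis=0)
```

Zero-phase filtering avoids shifting the reach in time, which matters when spatial and rotational recordings are compared sample by sample.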

Linear mixed-effects modelling
The independent variables in our statistical analysis were our between-subjects condition, learning feedback, and our within-subjects factor, block. For block, we were only interested in jerk and error (i) once participants had learned the model and (ii) during Switch-Block. Thus, for each of the four types of error (spatial target, spatial path, rotational target, and rotational path), we fit an exponential decay function (EDF, Equation SE3) to normalized error improvement, computed for each reach r in Main-Block similarly to normalized error (Eq. 6), except with a modified equation. For each participant, the fitted EDF served as a model of target and path learning. Participants were counted as saturating once their improvement hit 95% of their fitted EDF asymptote. Saturation trials differed between the four error types, so we defined a saturation block (Saturation-Block) within Main-Block by taking the maximum of the spatial target and rotational target saturation trials. That is, Saturation-Block was defined as the saturation trial and all subsequent Main-Block reaches before the feedback switch. Thus, both learning feedback and block are two-level categorical variables, with learning feedback having levels online and terminal, and block having levels Saturation-Block and Switch-Block.
Some participants failed to reach saturation in one or both of spatial and rotational target error, and some participants who saturated did so very late in Main-Block, leaving only a few reaches in Saturation-Block. To ensure that all participants had a well-defined saturation trial and a Saturation-Block with enough reaches for a meaningful mean, if the saturation trial was > 66, or either saturation trial was Inf (i.e., the participant failed to saturate), then 66 was used as the trial starting Saturation-Block for that participant. Similarly, if a participant happened by chance to find the model on one of the first few Main-Block reaches, or had a saturation trial < 35 for some other reason, then 35 was used as the trial starting Saturation-Block for that participant. This ensured that some time had passed for motor learning to occur.
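The saturation-trial logic above can be sketched as follows. The exact EDF is Equation SE3 in the Supplemental Material; this sketch assumes a generic saturating form y(r) = A·(1 − exp(−r/τ)), so the fitting details are illustrative, while the 95% criterion and the 35/66 clamping come from the text.

```python
import math
import numpy as np
from scipy.optimize import curve_fit

def edf(r, A, tau):
    """Assumed generic exponential-decay-to-asymptote form (stand-in for SE3)."""
    return A * (1.0 - np.exp(-r / tau))

def saturation_trial(trials, improvement):
    """First trial at which the fitted EDF reaches 95% of its asymptote."""
    (A, tau), _ = curve_fit(edf, trials, improvement, p0=(1.0, 10.0), maxfev=10000)
    # A*(1 - exp(-r/tau)) >= 0.95*A  <=>  r >= -tau * ln(0.05)
    return math.ceil(-tau * math.log(0.05))

def saturation_block_start(sat_trial):
    """Clamp: late or failed saturation -> 66; implausibly early -> 35."""
    if not math.isfinite(sat_trial) or sat_trial > 66:
        return 66
    if sat_trial < 35:
        return 35
    return int(sat_trial)
```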
Preliminary examination showed that the data were not normally distributed and contained many outliers (> 1.5 times the IQR above the third quartile or below the first quartile), so we could not use parametric tests such as ANOVA. Thus, we chose linear mixed-effects models (LMMs) with nonparametric bootstrapping to test for fixed effects of our independent variables (R package lme4, v1.1-35.1).
We set terminal feedback as our reference level for learning feedback, and Saturation-Block as our reference level for block. So, online feedback and Switch-Block were the treatment levels, respectively, for learning feedback and block. Anticipating an interaction between learning feedback and block, we used the following model formula, where DV = dependent variable, BK = block, LFB = learning feedback, and P = participant:

DV ~ BK * LFB + (1 + BK | P)

This model predicts DV of a reach r as:

DV(r) = β_Intercept + β_Sw(Tm)·BK(r) + β_On(Sat)·LFB(r) + β_InterX·BK(r)·LFB(r) + u_Intercept(P(r)) + u_Sw(Tm)(P(r))·BK(r)

where β_Intercept is the fixed-effect intercept, β_Sw(Tm) is the fixed effect of the switch on the terminal condition, β_On(Sat) is the fixed effect of the online feedback in Saturation-Block, β_InterX is the fixed-effect interaction, and u_Intercept(P(r)) and u_Sw(Tm)(P(r)) are the random-effect intercept and slope for participant P(r). Further, BK(r) is 0 if r is in Saturation-Block and 1 if r is in Switch-Block, while LFB(r) is 0 if r is in the terminal condition and 1 if r is in the online condition. As to interpreting these fixed effects: because the means of the random effects u_Intercept and u_Sw(Tm) are approximately zero, the fixed effects represent the effect of the independent variable before taking individual participant variation into account. For example, β_Intercept is the expected value of DV in Saturation-Block with terminal feedback, before taking individual differences into account, akin to how a population mean is normally interpreted. In turn, β_Sw(Tm) is the expected amount (possibly negative) one would add to β_Intercept to account for the effect of the switch on the terminal condition, while β_On(Sat) is the expected amount (possibly negative) one would add to β_Intercept to account for the effect of online feedback in Saturation-Block. Similarly, we can infer that β_Sw(On) = β_Sw(Tm) + β_InterX is the expected amount (possibly negative) one would add to β_Intercept + β_On(Sat) to account for the effect of the switch on the online feedback.
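The fixed-effects part of this prediction can be written out directly (random effects omitted, since their means are approximately zero). The coefficient values below are placeholders for illustration, not the paper's estimates.

```python
def predict_dv(bk, lfb, beta):
    """Fixed-effects prediction of DV for one reach.

    bk:  0 = Saturation-Block, 1 = Switch-Block
    lfb: 0 = terminal feedback, 1 = online feedback
    """
    return (beta["Intercept"]
            + beta["Sw(Tm)"] * bk
            + beta["On(Sat)"] * lfb
            + beta["InterX"] * bk * lfb)

# Placeholder coefficients (illustrative only):
beta = {"Intercept": 1.0, "Sw(Tm)": 0.15, "On(Sat)": 0.12, "InterX": -0.35}

# The effect of the switch on the online condition is Sw(Tm) + InterX:
sw_on = predict_dv(1, 1, beta) - predict_dv(0, 1, beta)
assert abs(sw_on - (beta["Sw(Tm)"] + beta["InterX"])) < 1e-12
```

The final assertion makes the dummy-coding consequence explicit: under treatment coding, β_Sw(On) is not a separate coefficient but the sum β_Sw(Tm) + β_InterX.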

Statistical analysis
We used nonparametric bootstrapping with 10,000 resamples to estimate 95% confidence intervals for fixed effects. Bootstrap resamples were stratified by BK * LFB. We did not use a parametric method, e.g., in conjunction with the Satterthwaite approximation, because the residuals of our models were neither normal nor t-distributed. The main issue was fat tails: even after removing extreme outlier reaches, many outliers remained (see Supplemental Material, section 8). We applied a Bonferroni correction to our alpha level (α = 0.05). We had three regression coefficients (learning feedback, block, and participant) with six dependent variables. However, because the spatial and rotational variables are highly correlated (Pearson correlation of 0.965 for normalized jerk, 0.949 for normalized target error, and 0.91 for normalized path error), we counted only three dependent variables along with the three regression coefficients. Thus, our corrected alpha level was 0.05/(3 × 3) = 0.05/9 ≈ 0.0056. We counted a result as significant only if it was significant in both the spatial and rotational data, and at the lower of the two significance levels. Note that although we only factored three dependent variables into our Bonferroni correction despite testing six, counting a result as significant only if it is significant in both spatial and rotational data entails that our false positive rate cannot be higher than it would be if we only analyzed the three variables from one of the two systems.
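The stratified percentile bootstrap can be sketched as follows. This simplified version resamples within each BK × LFB cell and takes percentile intervals at the corrected alpha; the paper's actual procedure refit the full LMM on each resample, which is omitted here, and the statistic shown (a difference of cell means) is a stand-in.

```python
import random

def stratified_bootstrap_ci(strata, statistic, n_resamples=10_000, alpha=0.0056):
    """Percentile CI for `statistic`, resampling within each stratum.

    strata: dict mapping a (BK, LFB) cell to its list of reach values.
    statistic: function taking a resampled strata dict, returning a float.
    """
    rng = random.Random(1)  # fixed seed for reproducibility of the sketch
    stats = []
    for _ in range(n_resamples):
        resample = {cell: rng.choices(vals, k=len(vals))
                    for cell, vals in strata.items()}
        stats.append(statistic(resample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Resampling within strata keeps each cell's sample size fixed across resamples, so no bootstrap data set ends up with an empty or badly unbalanced design cell.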
Our choice to use learning feedback as a factor captures our mixed within- and between-subjects experimental design. By testing the effect of the block treatment (Switch-Block) within a learning feedback type, we test the within-participant effect of the switch from Saturation-Block to Switch-Block. By testing the effect of the learning feedback treatment (online) within Saturation-Block, we test the between-participant effect of the online feedback (vs terminal feedback).

Results
For normalized jerk (both rotational and spatial), we found significant fixed effects consistent with our prediction that reaches made with online feedback have more jerk than those made with terminal feedback. All fixed effects estimated by LMMs and bootstrapping are given in Table 1, while the distribution of raw data is shown in Fig. 5. These fixed effects imply that, before taking individual differences into account, we can expect participants learning with online feedback in Main-Block to have a normalized rotational jerk value in Saturation-Block that is 0.153 higher than participants learning with terminal feedback, and a normalized spatial jerk value that is 0.109 higher. This means that learning with online feedback raises rotational jerk during Saturation-Block by 15.3% of baseline (i.e., of mean rotational jerk during NoFeedback-Block) compared to terminal feedback, and raises spatial jerk by 10.9%. Further, the fixed effects imply that the switch to online feedback for those participants who learned in Main-Block with terminal feedback increases normalized rotational jerk by 0.173 (17.3% of baseline more in Switch-Block than in Saturation-Block) and increases normalized spatial jerk by 0.141 (14.1% of baseline more in Switch-Block than in Saturation-Block). Finally, the fixed effects imply that the switch to terminal feedback for those participants who learned in Main-Block with online feedback decreases both normalized rotational and spatial jerk by 0.214 (21.4% of baseline less in Switch-Block than in Saturation-Block).

Table 1
Fixed Effects of Block and Learning Feedback on Jerk and Error. Rotational values are shown on the left, spatial values on the right in parentheses. BS.Estimate is the mean of the fixed-effects values from the LMM fittings to the bootstrapped data sets. CIs shown are 95%, after the Bonferroni correction of α = 0.0056. Likewise, all p-values are multiplied by 9 for the same correction. Intercept is the estimated fixed-effect value (roughly, the mean) of the terminal condition in Saturation-Block. Sw (Tm) is the effect of the switch on the terminal condition. On (Sat) is the effect of the online feedback in Saturation-Block, relative to terminal feedback. InterX is the interaction between Block and Learning Feedback. Sw (On) is the effect of the switch on the online feedback and is calculated as Sw (Tm) + InterX. An effect is interpreted as significant only if it is significant for both spatial and rotational data, and the lower of the two significance levels is used. Significance levels: ns = ≥ 0.05, * = < 0.05, ** = < 0.01, *** = < 0.001.
For normalized target error, we replicated our pilot results, showing that participants (learning with either online or terminal feedback) significantly reduce target error by 45-59% of baseline (see confidence intervals for Intercept and On (Sat) in Table 1). As to the fixed effect of the online feedback, consistent with our prediction, we found that, before taking individual differences into account, we can expect participants learning with online feedback in Main-Block to have a normalized rotational target error value in Saturation-Block that is 0.034 lower than participants learning with terminal feedback, and a spatial target error value that is 0.054 lower. This means that learning with online feedback lowers rotational target error during Saturation-Block by 3.4% of baseline (i.e., of mean rotational target error during NoFeedback-Block) more than terminal feedback, and spatial target error by 5.4%. We found similar fixed effects of the online feedback in Saturation-Block for path error, with a decrease of 0.028 for spatial path error and a decrease of 0.04 for rotational path error.
While these fixed effects of online feedback in Saturation-Block are consistent with our prediction that online feedback lowers error, we found the opposite at the switch. Before taking individual differences into account, we can expect participants learning with online feedback in Main-Block to have a normalized rotational target error value in Switch-Block that is 0.049 lower than their value in Saturation-Block. For spatial target error, this decrease is 0.047. Similar decreases were found for path error, with the decrease being 0.043 for spatial path error and 0.055 for rotational path error. There was no significant fixed effect of the switch on those participants learning in Main-Block with terminal feedback.

There are a few qualitative features of the data worth noting. First, looking at trial-by-trial means for the jerk measurements (Fig. 6, top row), we can see that participants learning with online feedback begin to increase in jerk above those learning with terminal feedback after only about 10 reaches. These trial-by-trial means also show that the effect of the switch on jerk is immediate (or nearly so), with jerk levels flipping within one or two reaches. Further, after the switch, jerk levels remain relatively stable, without participants regressing to their pre-switch jerk levels. The bootstrapping and LMM estimates suggest that the effect of the online feedback on error is much smaller than on jerk, and this can be seen in the trial-by-trial means plots. However, even in these plots, error by participants learning with online feedback is generally lower than by those learning with terminal feedback. Further, the continued improvement (decreasing error) of the participants who learned with online feedback after the switch is also visible. Finally, note that while the effects of the online feedback on jerk are fairly clear in the distributions of the reach measurements (Fig.
5, top row), the effects on error (middle and bottom rows) are more subtle. The bottom two quartiles (the top-half most accurate reaches) are comparable across learning feedback conditions in both Saturation-Block and Switch-Block, but the top two quartiles (the bottom-half most accurate reaches) are not: the distribution falls slightly lower for reaches made by participants who learned with online feedback. This suggests that while learning with online feedback did not make the most accurate reaches any more accurate, it did make the least accurate reaches more accurate.

Discussion
We found kinematic evidence supporting our hypothesis that auditory feedback from an online (i.e., real-time) error-based movement sonification can be used not only for diachronic internal model adjustments, but also for online trajectory adjustments. We sonified a morphokinetic gestural reach, unconstrained through 3D space, to an invisible target along an invisible path localized only by sonification. Between subjects, we found a significant increase in jerk from learning with online feedback compared to learning with terminal feedback. Within subjects, we found that after learning a reach with our movement sonification, a switch in feedback type, either from online to terminal or vice versa, had a significant effect on jerk consistent with online trajectory adjustments. That is, switching to terminal feedback decreased jerk and switching to online feedback increased it. As can be seen in Fig. 6, the effect on jerk from the switch in feedback type is nearly immediate and sustained through the entire switch block.
As to whether these online trajectory adjustments decreased error, we found mixed evidence. We found a significant decrease in both target and path error from learning with online feedback, compared to learning with terminal feedback. However, the switch to terminal feedback (for participants learning with online feedback) did not increase error, as would be expected if the online feedback were inducing online adjustments which corrected for error. In fact, the switch to terminal feedback had the significant effect of decreasing error for participants who had learned with online feedback. The switch to online feedback (for participants learning with terminal feedback) had no significant effect on error.

The role of online adjustments
There are two questions about the effects observed for error. First, why did the switch to online feedback not reduce error for participants who had learned with terminal feedback? Second, why did the switch to terminal feedback reduce error for participants who had learned with online feedback? We believe these questions have important implications for understanding the role of online trajectory adjustments in the effects of movement sonification on motor learning.
The first question likely has a straightforward answer. Even if the online feedback contains information which could be used to make error-reducing online adjustments, participants who learned their model reach with terminal feedback likely did not learn to use this information for online corrections. Instead, perhaps the online feedback merely prompted additional random, unhelpful twitches. This would explain why the switch increased jerk for these participants but did not decrease error.
The second question is more difficult. A possible explanation is that while the online adjustments did improve diachronic internal model adjustments (thus leading to less error at saturation in Main-Block), they were not successful as online trajectory adjustments.
Here the idea would be that there is in fact online twitching in response to the online feedback (as suggested by the kinematics), but this twitching is either random, or systematic in a way that does not tend to decrease error, on any one reach sometimes decreasing error and sometimes increasing it. The net effect of these random or systematic twitches might be more error on a given reach than there would have been without the feedback-induced twitches. This would explain why removing the feedback with the switch decreased error.
However, while on any one reach the feedback-induced twitching might increase error, over many reaches it might still lead to better learning of the model reach and thus less error at saturation. This might be because information in twitch-feedback correlations on a reach, while not usable for a real-time correction on that reach, is nonetheless usable for a correction on the next reach (i.e., for diachronic internal model adjustment). This hypothesis could be tested in future work by simulating random feedback-induced twitching in a hidden-state Bayesian model (the hidden state being the model reach, the measurements for Bayesian update being the sonification feedback). Such random twitching could not improve online control, but may improve the pre-planned motor trajectory.
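As a proof of concept for this proposed simulation, a toy one-dimensional version can be written down. The hidden state is a scalar "model reach" m; on each trial the executed reach is the current estimate plus a random twitch, and the sonification reports the executed reach's error. The twitch only adds within-trial error, yet a Kalman-style Bayesian update over trials still converges on the hidden state. All numerical values are illustrative assumptions.

```python
import random

random.seed(0)
m_true = 1.0                  # hidden state: the model reach
m_hat, var_hat = 0.0, 1.0     # learner's prior estimate and uncertainty
TWITCH_SD, SENSOR_SD = 0.3, 0.1
obs_var = TWITCH_SD**2 + SENSOR_SD**2   # the twitch is unknown to the learner

for trial in range(50):
    twitch = random.gauss(0.0, TWITCH_SD)       # random online adjustment
    executed = m_hat + twitch                   # twitch perturbs the reach
    feedback = (m_true - executed) + random.gauss(0.0, SENSOR_SD)  # sonified error
    y = m_hat + feedback      # learner's noisy observation of m_true (twitch unknown)
    k = var_hat / (var_hat + obs_var)           # Kalman gain
    m_hat += k * (y - m_hat)                    # diachronic model adjustment
    var_hat *= (1.0 - k)
```

Under this sketch, per-trial error |executed − m_true| stays noisy because of the twitch, while m_hat converges across trials, matching the pattern hypothesized above: twitches that are useless online can still carry information usable for next-trial corrections.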

Sensory augmentation
Some researchers take a perspective on movement sonification according to which the feedback is a mere temporal stabilizing signal, e.g., providing a rhythm to follow. This perspective takes the effects of movement sonification to be on a par with the way music guides dance (Giomi, 2020) or the way Rhythmic Auditory Therapy stabilizes gait in stroke rehabilitation (Thaut & Abiru, 2010). For example, there is recent work developing sonification systems aimed at improving gait asymmetries in both sport and rehabilitation contexts (Reh et al., 2019; Rushton & Kantan, 2022) and work exploring sonification's effects on bimanual rhythmic timing (Dyer et al., 2017b). Even when going beyond rhythmic tasks, researchers often remain focused on movement sonification's potential effects on timing and nondirectional kinematic variables such as velocity magnitude. For example, Danna et al. (2015) studied the effects of sonifying velocity magnitude on handwriting fluency, while Boyer, Bevilacqua, Guigon, Hanneton, and Roby-Brami (2020) showed that sonification of velocity magnitude affects movement velocity in ellipse drawing. This focus on timing is understandable (Danna & Velay, 2015), as successful motor control obviously depends on precise timing, and audition is the sense with the best temporal resolution and lowest response latency (Freides, 1974; Grahn, 2012; Stauffer, Haldemann, Troche, & Rammsayer, 2012).
While evidence, including from auditory-motor neural entrainment, shows that movement sonification can provide a stabilizing temporal signal (Crasta et al., 2018; Koshimori et al., 2019; Merchant et al., 2015; Morillon & Baillet, 2017; Thaut, Tian, & Azimi-Sadjadi, 1998, 2015), emerging evidence suggests that sonification feedback can also augment or replace spatial feedback from vision and proprioception (Boyer et al., 2017; Danna & Velay, 2017; Effenberg et al., 2016; Ghai, Schmitz, Hwang, & Effenberg, 2018; Ley-Flores et al., 2022). This evidence suggests that movement sonification can be a form of sensory substitution or sensory augmentation (Hu et al., 2020; Merabet et al., 2009; Tommasini et al., 2022), affording spatial information usable for the control of goal-directed bodily movements. The idea here is that while auditory feedback is usually a source of temporal information and not a dominant source of spatial information, spatial information of a sort usually conveyed by vision or proprioception can be artificially conveyed through acoustic feedback via a movement sonification system. Some of this evidence specifically supports the idea that movement sonification can replace or augment trajectory information normally carried by vision or proprioception. For example, in pointing tasks, perturbations (i.e., systematic distortions) in sonification feedback induce the same kind of motor adaptations observed in response to perturbations in natural visual feedback, and these adaptations can even transfer to visual feedback (Schmitz & Bock, 2014). In addition, auditory feedback encoding spatial tracking error has been found to improve performance on visuo-manual tracking tasks (Boyer et al., 2017).
Our results are relevant to this question of whether auditory feedback from movement sonification can convey the kind of spatial information normally conveyed via vision or proprioception. If auditory feedback from movement sonification could only ever be an external temporal stabilizing signal, or at best provide feedback on timing for the control of velocity magnitude, then it would not be expected to facilitate online trajectory adjustments in a spatial task such as reaching for a target along an intended path. Our results suggest that our sonification was able to induce online adjustments in a spatial task. A strong effect on directional jerk, such as we observed, would not be expected if auditory feedback from movement sonification were limited to providing a mere stabilizing temporal signal. While the lack of an increase in error with the switch from online to terminal feedback suggests that these online adjustments are not real-time corrections, our finding that online feedback decreases error during movement learning suggests that information from the correlation of these adjustments with feedback is nonetheless used for diachronic model adjustments which are corrective. A limitation of our study is that it leaves open whether this is (a) a fundamental limitation of neural plasticity, (b) a shortcoming of our sonification design, or (c) a limitation due to the limited training time. If (a), that would imply interesting limits to the flexibility of action-perception links. If (b), perhaps alternative motion-to-sound mappings would allow for online trajectory corrections. If (c), perhaps multiple training sessions over several days would lead to the emergence of more organized, corrective online adjustments.

Conclusion
We tested for feedback-based online trajectory adjustments in movement sonification by looking for changes in jerk and error during morphokinetic reaches towards an invisible target along an invisible path localized only by the sonification. We reasoned that differences in jerk would indicate differences in muscle twitching, suggesting online (real-time) adjustments. Consistent with the appearance of online adjustments, we found a significant increase in jerk during online sonification feedback, both between subjects during learning and within subjects with a feedback switch from online to terminal or from terminal to online feedback. While we found that online feedback decreased error during learning compared to terminal feedback, participants who learned with online feedback actually decreased error further when switched to terminal feedback. This suggests that, with the sonification tested here, information from the correlation of online adjustments with feedback is used for corrective diachronic internal model adjustments, but is not usable for real-time trajectory corrections.
Our results are the first evidence for online trajectory adjustments in response to online movement sonification feedback. They add further support to a picture of movement sonification on which it can be more than a mere stabilizing temporal signal. Spatial trajectory adjustments and spatial motor learning in response to online sonification feedback suggest that spatial information from the sonification can be used to augment, or replace, spatial information normally conveyed by vision or proprioception. However, whether due to a fundamental limitation of perceptual-motor plasticity, a limitation of our chosen sonification, or our limited training time, we observed important differences in the way spatial information from sonification is used, compared to spatial information from vision or proprioception. While it has long been known that movement sonification can improve motor learning, little is known about how this improvement is divided between improved diachronic internal model adjustments and online trajectory adjustments. Our results take

Fig. 1 .
Fig. 1. Demonstration of the task. Participants began in the standardized initial position shown on the left (A) and reached to a final target position as shown on the right (B). The orange arrow represents an invisible model reach. The blue arrow represents the path of the actual reach performed. Note the sensor placements on the forearm and upper arm and the optical tracking marker on the wrist. Participants were blindfolded, as shown. The red arrows show the coordinate axes of the exocentric spatial coordinates used by the optical tracking system (B). These arrows are shown superimposed on the square ruler used to define them when calibrating the optical system. A video demonstration of the task is available at https://osf.io/pvwfm/. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 3 .
Fig. 3. Reach paths, as recorded by OptiTrack in the exocentric spatial coordinates shown in Fig. 1B, from two participants. Units are in cm. All 100 reaches are shown for both participants. An example participant for the terminal condition is on the left (A and C), and an example participant for the online condition is on the right (B and D). The top (A and B) and bottom (C and D) panels show different views of the same reaches. The first 25 reaches (NoFeedback-Block) are shown in gray. Reaches 26-75 (Main-Block) are colored by feedback type: red for terminal feedback, blue for online feedback. In Main-Block, colour is graded by reach number, so that later reaches are darker. Reaches 76-100 (Switch-Block) are colored green. The model reach, selected randomly from the NoFeedback-Block reaches, is colored orange. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 5 .
Fig. 5. Dependent variable values for all reaches, by block and learning feedback. These box plots show the distribution (quartiles) of dependent variable values over all reaches. The left column shows rotational measurements, the right column spatial measurements. The top row shows jerk, the middle row target error, and the bottom row path error. Data are sorted by nesting learning feedback within block. For the jerk measurements (top row), notice that in Saturation-Block the distribution for reaches from the online learning feedback participants (blue box) is shifted slightly up compared to the distribution for reaches from the terminal learning feedback participants (red box). This reflects how learning with online feedback increased jerk in Saturation-Block. For Switch-Block, the opposite happens, reflecting how participants who learned with online feedback become less jerky when switched to terminal feedback, and participants who learned with terminal feedback become more jerky when switched to online feedback. For target error (middle row) and path error (bottom row), notice that while the bottom two quartiles are comparable across learning feedback conditions in both blocks, the top two quartiles for online learning feedback are lower than those for terminal learning feedback. This suggests that while learning with online feedback did not make the most accurate reaches any more accurate than learning with terminal feedback, it did make the least accurate reaches more accurate. An effect is interpreted as significant only if it is significant for both spatial and rotational data, and the lower of the two significance levels is used. Significance levels: ns = ≥ 0.05, * = < 0.05, ** = < 0.01, *** = < 0.001. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 6 .
Fig. 6. Trial-by-trial means for each of the six dependent variables. Blue and red lines show, for each trial, the mean value across all participants in a given learning feedback condition. Fill shows the standard error of the mean for that trial. The dashed line at trial 25 marks the end of NoFeedback-Block, while the dashed line at trial 75 marks the feedback switch. The left column shows rotational measurements, the right column spatial measurements. The top row shows jerk, the middle row target error, and the bottom row path error. For the jerk measurements (top row), notice that the blue line (online learning feedback) is higher than the red in Main-Block, while the blue and red lines cross immediately at the switch (trial 75). This represents the sudden change in jerk at the switch, with participants who received online feedback in Main-Block becoming much less jerky in Switch-Block, and vice versa for those who received terminal feedback in Main-Block. While the effects for target error (middle row) and path error (bottom row) are less dramatic, observe that the blue line (online learning feedback) is lower (i.e., less error) than the red line (terminal learning feedback) both in Main-Block and after the switch, and that the blue line continues to decrease even after the switch. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)