Touching with the eyes: Oculomotor self-touch induces illusory body ownership

SUMMARY
Self-touch plays a central role in the construction and plasticity of the bodily self. But which mechanisms support this role? Previous accounts emphasize the convergence of proprioceptive and tactile signals from the touching and the touched body parts. Here, we hypothesise that proprioceptive information is not necessary for self-touch modulation of body ownership. Because eye movements do not rely on proprioceptive signals as limb movements do, we developed a novel oculomotor self-touch paradigm where voluntary eye movements generated corresponding tactile sensations. We then compared the effectiveness of eye versus hand self-touch movements in generating an illusion of owning a rubber hand. Voluntary oculomotor self-touch was as effective as hand-driven self-touch, suggesting that proprioception does not contribute to body ownership during self-touch. Self-touch may contribute to a unified sense of bodily self by binding voluntary actions toward our own body with their tactile consequences.


INTRODUCTION
Touching our hand, by contrast with that of a stranger, immediately tells us that it is our own. Hand-to-body movements elicit some of the earliest and most ubiquitous sensorimotor experiences in life, already taking place in utero. 1 These experiences continue throughout the entire lifespan 2 with a frequency of up to 800 self-touches a day. 3 Self-touch is arguably one of the richest sensorimotor experiences, providing a tight contingency between motor, proprioceptive, and tactile information from both the touching and the touched body parts. Since the mid-20th century, 4 the unique sensorimotor contingency typical of self-touch has been linked with the development of self-awareness and body ownership. [5][6][7][8][9][10][11] Yet, the specific mechanisms behind the binding of self-touching actions with inner and outer sensations remain elusive. Earlier accounts stress the importance of the unique convergence of proprioceptive information generated by the moving effector and the tactile signals arising from the touched body region. [4][5][6]12,13 To the best of our knowledge, however, no study has directly investigated whether proprioception is necessary for self-touch to modulate body ownership. The voluntary nature of self-touching movements might, in fact, be sufficient to explain the binding properties of self-touch. 11,[14][15][16][17] For example, Hara et al. found that voluntary self-touch boosts the sense of body ownership compared to passive self-touch and classical multisensory stimulation. 11 Disentangling the role of afferent proprioception from that of efferent motor signals in self-touch is difficult because self-touch normally involves voluntary limb movements, thus always generating proprioceptive signals from the moving effector. Here, we dissociated proprioception from other motor signals by creating a self-touch condition where tactile sensations were linked to voluntary eye movements.
Eye movements do not depend on proprioceptive information as limb movements do. 18 It is known that the absence of proprioceptive afferent signals severely impairs limb movements, as demonstrated by studies with deafferented patients. 19 Conversely, lesions to the oculomotor system in monkeys 20 and in humans 18 do not impair the accuracy and precision of gaze pointing, suggesting that eye proprioception does not participate in visual localization. 21 Thus, in an artificially created oculomotor self-touch condition, the proprio-tactile convergence hypothesis predicts that eye movements should weaken the experience of body ownership compared to ordinary, hand-driven self-touch. We tested this hypothesis by comparing the effectiveness of hand versus eye movements in eliciting the so-called rubber hand illusion (RHI). 22 We used a 2 (effector type: hand, eyes) by 2 (spatial congruency: congruent, incongruent) within-participants experimental design to test a hand-driven self-touch condition (Figures 1A and S1A) against an eye-driven self-touch condition (Figure 1B).

RESULTS
During the self-stimulation phase, participants viewed a rubber hand located in the vicinity of their occluded left hand. In the hand-driven ''proprioception'' condition (Figure 1A), participants reached out and touched the index finger of a left rubber hand with their right hand. The impact was captured by a sensor concealed inside the rubber finger and was replicated by one of two vibrotactile transducers, stimulating either the participants' left index (congruent condition) or the little finger (incongruent condition) (see Video S1). In the eye-driven ''no-proprioception'' condition (Figure 1B), participants first fixated a mark in the vicinity of their resting right hand and then voluntarily directed their gaze to the index finger of the rubber hand. An eye-tracker monitored the participants' gaze. Gazing on the index finger of the rubber hand triggered the reproduction of pre-recorded, realistic touch sensations on either the participants' left index (congruent condition) or little finger (incongruent condition) (see Video S2). To match the vision of the moving finger in the hand-driven condition, the participants' gaze was continuously indicated by a red dot projected on the table. Thus, the two self-touch conditions were identical in all respects apart from the effector used to perform the self-touching movements. The eye-driven ''no-proprioception'' condition provided comparable motor commands but with much weaker proprioceptive information than the hand movement condition. 18

Figure 1. Experimental conditions. (A) In the hand-driven ''proprioception'' condition, participants' touch of the index finger of a rubber hand was replicated on either their left index (congruent) or little finger (incongruent) (see Video S1).

During the self-stimulation phase, participants were asked to report the onset time of the rubber hand illusion, 23 with shorter onset times indicating a stronger illusion. Participants then performed a cross-modal congruency task (CCT). 24 They identified the location of tactile stimuli delivered to either their left index or middle finger, while ignoring visual distractors projected on the fingers of the rubber hand. The task comprised thirty homologous and thirty non-homologous trials. Longer response times to non-homologous trials indicated a stronger interference between visual and tactile stimuli, and therefore a stronger illusion. 24

DISCUSSION
Self-touch plays a fundamental role in developing self-awareness and a sense of body ownership. Although previous studies have investigated the contribution of the tactile component of self-touch to self-representation, 5 the present study focused on its motor component. Specifically, we tested whether proprioceptive information arising from the moving effector is strictly necessary for self-touch to modulate body ownership. Eye movements do not crucially rely on afferent proprioceptive signals as limb movements do, 18 and eye movements do not typically produce tactile consequences. Despite the profound differences between eye and limb movements, coupling voluntary eye movements to spatially and temporally congruent tactile inputs induced an illusion of limb ownership as strongly and quickly as binding tactile inputs to limb movements. This effect was reliable across subjective and objective measures of body ownership and occurred without any preliminary training or habituation.
The present findings undermine earlier accounts of self-touch that require the convergence of tactile and proprioceptive signals. 4,12 Our results align with recent studies suggesting that active self-touch movements enhance body ownership compared to passive movements and play a key role in the spatial coherence of bodily self-awareness. 14,28 Yet, most previous studies on self-touch employed an active-versus-passive movement paradigm. Because proprioceptive signals were equally present in active and passive movements, those studies could not conclusively rule out that proprioception contributed to modulating body ownership through self-touching movements. Our findings, based on a newly introduced oculomotor self-touch paradigm, suggest that the convergence of proprio-tactile signals from the touching and the touched body parts is only a special case of a more general mechanism shaping the sense of body ownership.
Voluntary motor signals other than proprioception, such as motor imagery, 29 may shape the experience of body ownership because they bind motor commands to tactile outcomes, leading to a unified experience of a single acting and perceiving self. 16,17,30 In much the same way that turning a switch provides us with a sense of agency over the lights, 15 active self-touch on our body may provide us with a sense of bodily self. Future studies should identify the specific motor component responsible for the observed effects, by directly testing the role of the sense of agency, 15 efference copy, 31 motor imagery, 29 and causal inference. 17,32 Sensorimotor accounts of visual experience posit that sensorimotor contingencies linking efferent oculomotor signals to afferent visual feedback are learned from specific oculo-visual invariances. 33 In ordinary conditions, eye movements affect visual inputs only, whereas reaching hand movements produce tactile feedback. A crucial prediction made by sensorimotor theories is that an agent provided with new contingencies between movements and sensory outcomes should be able to learn new sensorimotor invariances and perceive through them. Studies on sensory substitution support this hypothesis. [34][35][36] Our study shows that the artificial coupling of voluntary eye movements with tactile feedback produces a sensorimotor contingency that affects bodily self-awareness.
The oculomotor system is thought to participate in the spatial re-mapping of visual, auditory, and tactile inputs. Animal studies show that the superior colliculus provides the neural substrate for controlling voluntary saccades and integrating visual, auditory, and tactile stimuli into a common reference frame. 37 Our finding may extend the scope of this mechanism by including the possibility that oculo-tactile spatial transformations can also modulate high-level cognitive processes such as bodily self-awareness, similarly to what has previously been suggested for ordinary hand-driven sensorimotor experiences. 16 Humans automatically look at what they are about to touch, 38 but oculomotor and tactile events are not normally linked by a causal relationship, since moving the eyes only affects visual inputs. The artificial coupling of tactile feedback to oculomotor signals in our experiment may have leveraged a proactive function of eye movements 38 to induce an instrumental use of the gaze. Updating body ownership by binding self-generated gazing toward our own body to tactile consequences may find applications in future prosthetic devices, virtual reality, and human-machine interaction techniques. 39

Limitations of the study
The present experiment tested hand versus eye movements only under active (i.e., voluntary) conditions. Therefore, our data cannot quantify any potential difference between illusory body ownership induced by passive versus active movements. This choice was due to the methodological impossibility of creating a passive eye movement condition. However, the active-versus-passive comparison has already been extensively investigated in previous studies. 8,11,14,28,40,41 Moreover, the present study did not aim to investigate voluntary movements per se. Instead, it aimed to test whether proprioceptive signals during self-touch were strictly necessary to modulate body ownership.
The present study focused on the role of proprioception in self-touch but could not quantify the contribution of tactile sensations, which were already investigated in previous experiments. 5,14,28 Given that we used near-identical tactile stimuli in each condition, our results suggest that the effect of self-touch on body ownership goes over and above any potential effect of tactile stimulation. That is, the mere presence of a tactile stimulus coupled with a voluntary movement seems insufficient to modulate body-ownership, unless the spatial location of touch is also congruent with the location specified by the eye/hand motor plan.
Both our hand-driven and eye-driven self-touch conditions included some visual information (i.e., the vision of the moving finger and a red dot showing the participant's gaze, respectively, plus the rubber hand). Thus, the present experiment cannot rule out the possibility of a three-way interaction between motor, tactile, and visual signals. First, participants' eye movements were unconstrained during the hand-driven ''proprioception'' condition. Therefore, oculomotor (and visual) signals might have contributed to the RHI in that condition. Future studies could rule out this possibility by removing vision of the hand movements (e.g., in darkness) or by constraining eye movements (e.g., by asking participants to fixate a specific point). However, performing blind hand movements toward a rubber hand would require extensive training, whereas we wanted to test untrained sensorimotor integration. Moreover, although blind hand movements are technically possible, equivalent no-vision eye movements are unfeasible, as voluntary saccades are highly inaccurate in darkness. 42 Crucially, the eye movements in our hand-driven ''proprioception'' condition were only incidental, whereas those in our eye-driven ''no-proprioception'' condition had a clear instrumental effect, directly causing the tactile sensations experienced by the participant. Thus, any residual role of eye movements in our ''proprioception'' condition would only make our findings in the ''no-proprioception'' condition more striking, by showing that the summed effect of hand movements plus incidental eye movements can be matched by instrumental eye movements alone.
Besides eye movements, the specific visual inputs present in both our conditions might affect the strength of the RHI. In our eye-driven ''no-proprioception'' condition, we used a red dot to continuously show the participants' gaze throughout the block. This visual cue was necessary to ensure that our eye-driven ''no-proprioception'' condition was visually matched to the hand-driven ''proprioception'' condition, where the participants could see their own hand performing the self-touch movements, thereby enhancing the internal validity of our study. Yet, previous studies have shown that it is possible to induce an RHI simply by creating a multisensory match between a visual and a tactile stimulus. 43,44 One may therefore ask how specific our result is to sensorimotor, oculotactile integration, as opposed to simple multisensory integration. Eye movements in our ''no-proprioception'' condition had a clear instrumental effect, with the participants' saccades directly causing them to experience tactile sensations. That is, our artificial oculotactile coupling preserved the voluntary aspects of self-touch, in that participants experienced no tactile sensations if they decided not to perform a voluntary saccade toward the rubber hand. In other words, compared to a classical visuotactile RHI stimulation, our condition necessarily involved a voluntary motor component akin to motor programming, efference copy, and the sense of agency. Although we did not collect any direct measure of the sense of agency for the visual cue in this study, all participants were able to internalize and utilize this new sensorimotor contingency after just a few seconds. In fact, the rapidity and effectiveness of our paradigm in inducing a strong sense of control over a completely new oculotactile contingency is in itself an interesting result.
Future studies should directly address this question by investigating the role of the sense of agency, 15 efference copy, 31 motor imagery, 29 and causal inference 17,32 in our paradigm.
Finally, it is known that visual signals that are incongruent with the representation of the body (e.g., seeing a block of wood instead of the rubber hand, or aligning the rubber hand in an anatomically incongruent position) weaken or abolish the RHI. Moreover, recent evidence shows that visual input can decrease the sensitivity of muscle proprioceptive feedback. 45 Future studies should specifically address the interaction between self-touch and vision, for example by testing self-touch for body parts that are commonly in sight (e.g., forearms) versus parts of the body that are rarely seen (e.g., back).

Our active rubber hand illusion paradigm has limitations that should be addressed in future studies. The classical control condition for the active rubber hand illusion introduces a short delay between the movement and the tactile sensation. In the present study, however, we only investigated the effect of the spatial congruency of touch and oculomotor signals. The reason for this choice was that the hand and the eye motor systems have vastly different temporal properties, with eye movements being orders of magnitude faster than any other human movement. 46 Moreover, eye movements have a distinctly proactive nature, systematically anticipating hand-object contact by approximately 150 ms. 38 Given that crossmodal integration is affected by the latencies of the signals involved, 37 we surmised that oculotactile integration may have very different temporal properties than sensorimotor integration during hand movements. Future studies should directly address this question by testing the temporal dynamics of oculotactile versus limb sensorimotor integration. Nonetheless, the main effect of spatial congruency we found across all our dependent measures suggests that our spatial manipulation was as successful as the classical temporal manipulation in inducing different degrees of RHI.
Second, besides self-reports, onset time, and the CCT, several other behavioral and physiological measures have been previously used to test the RHI, including the proprioceptive drift, skin conductance, skin temperature, and self-recognition. 25 In particular, given that the proprioceptive drift (i.e., the perceived displacement of the left hand toward the location of the rubber hand) was shown to be independent of the sensation of ownership, 47 future experiments could test whether drift also occurs in gaze-contingent conditions similar to the one we developed.
Finally, previous studies have shown that simply seeing an object approaching (but not touching) a rubber hand located near the real hand can elicit a sensation of ownership over the rubber hand. 48 Although these findings generally show that top-down predictions play an important role in the RHI, eye movements do not normally generate tactile feedback as hand movements do. Therefore, one may speculate that the participants' prior expectation of a tactile stimulus would be much lower during our oculomotor self-touch condition than during normal, hand-driven self-touch. Yet, we found strong RHI modulation after oculotactile stimulation. This result suggests either that our effect does not rely on the expectation of a tactile stimulus, or, alternatively, that such expectations are not disrupted by an artificial gaze-contingent tactile stimulation. Recent studies on the peripersonal space show that new statistical regularities can be quickly learnt, and new predictions can be created. For instance, pairing a visual (or auditory) stimulus with a tactile one can readily create new multisensory associations, 49,50 sometimes based on very rapid trial-by-trial recalibration. 51 Our results are in line with these findings and suggest that a new artificial oculotactile contingency can readily be learnt even when it involves self-touch and occurs on one's own body rather than in the peripersonal space.

STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:

Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Vincent Hayward (vincent.hayward@sorbonne-universite.fr).

Materials availability
This study did not generate new unique material.

Data and code availability
All data needed to evaluate the conclusions in the paper are present in the paper and/or the supplemental information.
This paper does not report original code.
Any additional information is available from the lead contact upon request.

EXPERIMENTAL MODEL AND SUBJECT DETAILS

Participants
The sample size of the study (n = 24; 16 females; mean age ± SD: 23.1 ± 2.7 years) was decided a priori on the basis of the results of previous Crossmodal Congruency Task (CCT) experiments used to quantify the strength of the RHI. 52 Twenty-eight right-handed healthy participants were originally recruited. One participant was excluded before the analyses due to technical issues with the tactile stimulation. Another participant was excluded from the analyses because they could not follow the instructions. Conforming with the standard procedure for CCT paradigms, 24,52 two further participants were excluded after visual inspection of the data because they produced more than 35% false alarms in the catch trials of at least one experimental condition (see below).
The experiment was run in accordance with the protocol approved by the research ethics committee of the School of Advanced Study, University of London. Recruitment and procedures adhered to the Declaration of Helsinki. All participants were naïve regarding the hypotheses underlying the experiment. All participants received written and verbal explanation of the general purpose of the study, provided their written informed consent before the beginning of the testing, and were compensated for their time at a rate of £7.5 per hour (∼£10).

Instructions and visual stimuli were projected onto the table easel and the rubber hand using an LCD projector (NEC, model LT150, Tokyo) mounted above the desk. Participants' responses in the different tasks were collected using either a mouse or a foot-pedal. Participants' eye movements were recorded via an EyeLink 1000 Plus eye tracker (SR Research, Canada) arranged in a tower-mount configuration. Participants wore wireless headphones (Cowin Electronics Co. Ltd., model E7, China) playing pink noise throughout the entire experiment to mask any noise from the tactile actuators. All the software for the experimental tasks was coded in Matlab 2018a (The MathWorks, Inc., USA) and Psychophysics Toolbox v3. 53,54

Setup for the hand movement condition

In the classical RHI, a trained experimenter uses two brushes to stroke homologous points on the participant's and the rubber hand in sync, producing a passive, visuo-tactile stimulation. 22 Voluntary self-touch versions of the RHI using virtual reality 55 or robotic mediation 11 have also been described. Here, however, we used an electronic circuit involving accelerometers and vibrotactile actuators to achieve a direct, unmediated haptic touch of the rubber hand (see the left panel in Figure S1A and Video S1).
In this ''haptic'' version of the RHI, participants used their own right index finger to directly tap on the index finger of the rubber hand and received a simultaneous and proportional mechanical touch on their own left hand, either on the same (index) or a different (little) finger. Tactile stimuli delivered on the rubber hand were detected by an accelerometer (SparkFun Electronics, model ADXL335, USA) located inside the index finger of the cosmetic glove, at the level of the intermediate phalanx. The analog output signal from the z-axis of the accelerometer was amplified (Shenzhen Cavins Technology Co., Ltd., Nobsound 50Wx2 Amplifier, China) to drive a vibrotactile actuator (Tactile Labs Inc., Haptuator Mark II, Canada) secured to the participants' left index finger with a Velcro strip. In this setup, the vibrotactile pattern produced by the participant's tap on the index finger of the rubber hand is instantaneously and faithfully reproduced on the congruent (index) finger of the participants' left hand. A second actuator was attached to the participants' little finger to create a spatially incongruent condition. An additional actuator was placed on the middle finger to perform the CCT task (see below). The appropriate actuator in each task and in each trial was activated by a computer-controlled switch. As the RHI is affected by the crossmodal correspondence of visuo-tactile stimuli, 25 we also placed Velcro strips on the intermediate phalanxes of the index, middle, and little fingers of the cosmetic glove, providing a visual match for the sensation produced by the strips holding the actuators on the participants' fingers.

Experimental design
To compare the efficacy of hand and eye movements in inducing the RHI, we employed a 2 (effector-type: hand, eye) x 2 (spatial-congruency: congruent, incongruent) within-subject design. The effector-type factor was blocked and counterbalanced across participants, while the order of the spatial-congruency conditions was randomised within participant. To assess the strength of the RHI, we selected three of the best-established measures for the RHI 25 that are thought to underlie different aspects of the illusion. 47 The proprioceptive drift was not measured, as our eye-tracking montage would have interfered with the pointing procedure. Other measures of the RHI, such as the galvanic skin conductance response to a threatening stimulus and skin temperature, were also discarded as they were difficult to integrate within our setup/procedure.
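As an illustration, this blocking and counterbalancing scheme can be sketched in a few lines (a minimal Python sketch with hypothetical names; the study's software was written in Matlab, and the exact counterbalancing rule, here a simple alternation across participants, is an assumption):

```python
import random

EFFECTORS = ("hand", "eye")
CONGRUENCY = ("congruent", "incongruent")

def block_order(participant_idx, rng=random):
    """Return the four-block order for one participant: the effector factor
    is blocked and counterbalanced across participants (alternating here),
    while congruency order is randomised within each effector block."""
    effectors = EFFECTORS if participant_idx % 2 == 0 else EFFECTORS[::-1]
    order = []
    for eff in effectors:
        congruencies = list(CONGRUENCY)
        rng.shuffle(congruencies)  # randomise congruency within participant
        order += [(eff, c) for c in congruencies]
    return order
```

For example, even-numbered participants would run both hand blocks first and both eye blocks second, with the congruent/incongruent order shuffled inside each pair.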

Procedure
Each of the four experimental conditions (hand/eye movements - spatially congruent/incongruent) was tested in a separate block. Each block was divided into three phases (see below and Figure S1B). Appropriate training for each of the three tasks was provided before the beginning of the experiment. The training ensured that the participants were able to produce accurate hand- and eye-driven self-touch movements in our setup and to perform all the tasks in our experiment well. Eye tracking calibration was performed at the beginning of each block.

Phase 1. induction of the RHI
An example trial of this phase can be found in Video S2. At the beginning of each trial, a 15 × 15 mm blue square was projected on the easel, 20 cm to the right of the rubber hand. In both the hand and the eye movement conditions, participants were asked to gaze at the blue square to start a trial. Once fixated, the blue square turned green after a short delay, prompting participants to start a hand or an eye movement towards the rubber hand. In the hand movement condition, participants used their right index finger to gently tap on the green square (starting point) and then on the Velcro band on the index finger of the rubber hand (landing point). Participants laid their right arm on an articulated armrest support throughout, so as to produce straight movements from the starting to the landing point.
In the eye movement condition, participants rested their right hand out of sight and instead moved their gaze alternatively from the green square (starting point) to the Velcro band on the index finger of the cosmetic glove (landing point). Given that in the hand movement condition vision of the moving hand was necessary to perform the task, in the eye movement condition participants' gaze was continuously shown by a 15 mm-diameter red circle projected on the easel. The visual cue thus ensured that the hand/eye movement conditions were comparable in terms of visual inputs (see Figure S5).
When a hand or an eye movement landed on the index finger of the rubber hand, participants received a simultaneous tactile stimulus on either their left index (spatially congruent condition) or little finger (spatially incongruent condition). The tactile stimuli in the hand movement condition were a direct replication of the signal recorded by the accelerometer, while the touches in the eye movement condition consisted of a playback of a pre-recorded signal and were triggered by the participants' gaze crossing a threshold to the right of the landing point. Using a rightward threshold rather than the precise landing point allowed us to compensate for the technical delay between the eye landing on the target point and the tactile feedback (see Figure S4). The relative distance between the threshold and the landing point was determined in an informal pilot study in which participants adjusted the position of the threshold until their eye landing on the target point of the rubber hand and the tactile feedback felt synchronous. The intensity of the pre-recorded tactile stimulus was jittered (by multiplying the amplitude of the signal pre-recorded via an accelerometer by a factor between 0.1 and 0.3). This jitter was implemented to match the slight intensity variability of touches produced by hand movements. In the spatially congruent condition, tapping/gazing on the index finger of the rubber hand produced a touch on the homologous finger of the participant, while in the spatially incongruent condition, tapping/gazing on the index finger of the rubber hand produced a touch on the little finger of the participant. A new trial started 500 ms after the tactile stimulation. Participants performed a total of 60 eye/finger movements in each condition, corresponding to about two minutes of stimulation.
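The gaze-contingent triggering and amplitude jitter can be sketched as follows (an illustrative Python sketch, not the study's Matlab code; the coordinate system, threshold offset, and pre-recorded waveform are hypothetical, while the 0.1-0.3 jitter range is taken from the text):

```python
import numpy as np

# Hypothetical landing point (x, in projected-table coordinates) and the
# trigger threshold placed to its right to absorb the playback latency.
LANDING_X = 0.0
THRESHOLD_X = LANDING_X + 40.0  # offset tuned in the pilot study (units arbitrary)

# Hypothetical stand-in for the pre-recorded accelerometer tap waveform.
t = np.linspace(0, 8, 3000)
prerecorded = np.sin(t * 24) * np.exp(-t)

def check_gaze(gaze_x, prev_x, rng):
    """Return a jittered copy of the pre-recorded tap when the gaze crosses
    the threshold moving leftward toward the rubber hand; None otherwise."""
    if prev_x > THRESHOLD_X >= gaze_x:
        # Jitter the amplitude by a factor in [0.1, 0.3], mimicking the
        # natural intensity variability of hand-driven taps (values from the text).
        return prerecorded * rng.uniform(0.1, 0.3)
    return None
```

For example, a gaze sample moving leftward from x = 50 to x = 30 crosses the threshold and returns a jittered copy of the waveform, whereas a sample still right of the threshold returns None.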
An estimate of the Onset Time (OT) of the illusion 23 was obtained in this phase. Participants were instructed to ''step on a pedal as soon as the rubber hand feels part of yourself''. They were also told that this may or may not happen in different conditions.

Phase 2. crossmodal congruency task (CCT)
Immediately after the RHI induction phase, participants performed a tactile discrimination task adapted from the Crossmodal Congruency Task described by Zopf and colleagues. 24 In each trial, a fixation cross was projected between the index and middle fingers of the rubber hand for 1.5 s. Then, a yellow circle was projected on either the index or the middle finger of the rubber hand, flickering for 200 ms. After a jittered delay between 100 and 200 ms, a vibrotactile stimulus was delivered on the participants' left hand. The tactile stimulus consisted of a series of three short pulses (total duration: 100 ms) delivered either on the homologous or non-homologous finger, with respect to the visual stimulus. Participants were asked to ignore the visual distractors and to respond as quickly and as accurately as possible to the tactile stimulation only, by clicking on the left or right button of the mouse, according to the location of the touch. The response mapping was fixed across participants 24 and required a left-click for a tactile stimulation on the middle finger and a right-click for a touch on the index finger. Feedback for correct, incorrect, or too-slow (i.e., ≥1.5 s) responses was provided after each trial.
To maintain the specific RHI conditions of a given block throughout the CCT, after each CCT trial participants performed two further hand/eye movements, as in the induction phase of that block. 24,52 The CCT consisted of 30 homologous and 30 non-homologous trials presented in a fully randomised order. To ensure that participants looked at the visual stimuli, eight no-go trials were randomly presented during each block. 24 In these trials, visual stimuli were delivered on both fingers of the rubber hand, but only one finger of the participants' left hand was stimulated with a vibrotactile stimulus. Participants were instructed to withhold their response in these trials. Participants who responded to more than 35% of these trials were excluded from further analyses (two participants met this criterion; see participants section).
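The catch-trial exclusion rule can be expressed compactly (an illustrative Python sketch; the data layout and function name are hypothetical, while the 35% criterion is from the text):

```python
def flag_exclusions(no_go_responses, threshold=0.35):
    """Flag participants who responded on more than `threshold` of the no-go
    catch trials in any experimental condition.

    no_go_responses: {participant_id: {condition: list of bools, True =
    responded on a catch trial}}; returns the set of ids to exclude."""
    excluded = set()
    for pid, conditions in no_go_responses.items():
        for cond, responses in conditions.items():
            if sum(responses) / len(responses) > threshold:
                excluded.add(pid)
                break  # one offending condition is enough
    return excluded
```

With a hypothetical participant responding on 4 of the 8 no-go trials of one condition (50% > 35%), only that participant is flagged.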

Phase 3. subjective report (SR) of the RHI
We used two similar questionnaires to quantify participants' phenomenological experience of the RHI induced by either hand or eye movements. The questionnaires were completed at the end of each block. Each questionnaire was composed of eight items directly derived from previous RHI studies. 22 The eight items were divided into two categories: three experimental questions, aiming to quantify the strength of the illusion, and five control questions, aiming to assess participants' suggestibility.

iScience 26, 106180, March 17, 2023

The Onset Time of the RHI was analysed with a similar 2 × 2 rmANOVA. Yet, given that not all participants reported an illusion in all four experimental conditions, statistical analyses were only run on the data from participants who responded in all the blocks (n = 12; Figure 1D). The rationale for this choice is that missing values may be due to a range of reasons, from not experiencing any RHI to forgetting the instructions. For example, during the debriefing at the end of the experiment, a few participants even reported that they had forgotten to step on the pedal because they were experiencing a strong RHI (see Figure S3 for details on the formal analyses supporting this rationale).
Finally, given that Subjective Reports were given on a continuous visual analog scale (from −100 = strongly disagree to +100 = strongly agree) instead of a 7-point Likert scale, 22 participants' responses were also analysed through parametric tests. Participants' agreement with each statement of the questionnaire was first averaged across question type (i.e., RHI/control items), and then analysed in a 2 (spatial-congruency: congruent, incongruent) x 2 (effector-type: hand, eye) x 2 (item-type: illusion, control) rmANOVA.
For each of the three dependent variables, we expected a main effect of spatial-congruency, in line with the general principle that crossmodal stimulation must be spatially congruent in order to elicit the RHI. 37 Crucially, in line with our main hypothesis of a similar RHI for hand and eye movements, we also predicted no significant interaction between the main effects. Given that the experiment was designed to allow assessing the null hypothesis (i.e., that eye movements and hand movements induce the same amount of RHI), non-significant interactions between spatial-congruency and effector-type were further tested through Bayesian t-tests on the difference between the spatially congruent/incongruent conditions in each type of movement. This allowed us to determine whether the data supported the null hypothesis or if, alternatively, the null result could reflect insufficient statistical power. 26
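As a sketch of these planned comparisons, the per-participant congruency effects and the congruency-by-effector contrast could be computed as below (illustrative Python with simulated data; the study's analyses used rmANOVAs and Bayesian t-tests, and every number here, apart from the sample size of 24, is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24  # final sample size reported in the study

# Simulated per-participant mean CCT response times (ms) for the
# 2 (effector) x 2 (congruency) design; all values are made up.
base = 520 + rng.normal(0, 40, n)  # participant baseline RT
effect = 35                        # hypothetical congruency cost (ms)
rts = {
    ("hand", "congruent"):   base + rng.normal(0, 15, n),
    ("hand", "incongruent"): base + effect + rng.normal(0, 15, n),
    ("eye", "congruent"):    base + rng.normal(0, 15, n),
    ("eye", "incongruent"):  base + effect + rng.normal(0, 15, n),
}

# Per-participant congruency effect (incongruent - congruent) per effector:
# a larger value indexes stronger visuo-tactile interference (stronger RHI).
hand_effect = rts[("hand", "incongruent")] - rts[("hand", "congruent")]
eye_effect = rts[("eye", "incongruent")] - rts[("eye", "congruent")]

# Main effect of spatial congruency: are the effects reliably above zero?
t_cong, p_cong = stats.ttest_1samp(np.concatenate([hand_effect, eye_effect]), 0)

# Interaction contrast: do hand and eye differ in their congruency effect?
t_int, p_int = stats.ttest_rel(hand_effect, eye_effect)
print(f"congruency: t={t_cong:.2f}, p={p_cong:.4f}")
print(f"effector x congruency: t={t_int:.2f}, p={p_int:.4f}")
```

In the paper's logic, a non-significant interaction contrast was then followed up with a Bayesian t-test on these same difference scores, to distinguish genuine support for the null from insufficient power.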