Constraining the Sense of Agency in Human-Machine Interaction

Abstract One of the most significant issues in Human-Machine Interaction concerns how much autonomy to delegate to automation and whether this delegation degrades the human perception of control, referred to as the Sense of Agency. In this study, the What-Whether-When model of intentional action was used to look for variations in the Sense of Agency by measuring the Intentional Binding effect when the human is told what action to perform, whether to act, or when to act. Participants were asked to reproduce the time interval between a keypress and an acoustic tone (delivered at different time intervals). In Experiment 1, this action could be entirely voluntary or fully constrained by the computer. In Experiment 2, the computer constrained only one decision component at a time (either what, whether, or when). Experiment 1 indicates that Intentional Binding is increased for voluntary actions. Results from Experiment 2 suggest that selecting what to do, whether to act, and when to act have different effects, with the Sense of Agency being degraded when participants were told what action to perform and whether to act. Further, the results show the presence of a specific window of opportunity needed for the Sense of Agency to develop under each constraint.


Introduction
With the spread of technology across every field and aspect of everyday life, integrating human and machine competencies has become essential (Zanatto, 2019). Nevertheless, how to make humans interact successfully with technology remains problematic. Behind such difficulty is the lack of a common language between the two parties. On the one hand, the user is often incapable of interpreting and understanding the machine's signals and procedures, while on the other, the machine often struggles to assess the user's intentions and actions. Human-Machine Interaction (HMI) is a collaborative environment, where tasks are re-distributed among human and non-human agents with different degrees of responsibility. Therefore, humans can face situations that vary on a continuum from fully voluntary actions to fully stimulus-driven (automated) actions.
Research in the field of HMI has moved in different directions, from situation awareness and interface design to trust and acceptance. More recently, the Sense of Agency (SoA) has also become a focus of attention (Berberian et al., 2012).

The sense of agency
The SoA is defined as the perception of being in control of one's actions. Primarily described as the awareness of initiating an intentional action whose consequences are known, it is associated with a person's intention to produce a change in the environment (Chambon et al., 2014). At the core of the SoA is therefore the association between a voluntary action and an outcome (Haggard, 2017).
According to the Comparator Model (Frith et al., 2000), the SoA is tied to sensorimotor processes. Depending on our intentions, a representation of the desired motor state and the subsequent motor commands are created. From the motor commands, the predicted sensory consequences are also generated. The movement derived from the motor commands produces sensory feedback for the motor system about the state of the movement, which is compared with the predicted feedback. The SoA arises when the predicted sensory feedback matches the actual sensory feedback. If the actual sensory consequences do not match the predicted ones, as in the case of movements induced by external forces, the SoA will not be elicited.
The Comparator Model explanation for the SoA has been supported by several studies on the perceived intensity of self-produced tactile sensations compared to externally produced ones (Blakemore et al., 1998, 1999, 2000).
Differently from the Comparator Model, the Theory of Apparent Mental Causation downplays the contribution of the motor system to the SoA, assuming the presence of an unconscious causal pathway that is responsible for the action, while the intention to act and the act itself remain conscious events that contribute to the illusion of having caused our actions. According to the theory, the SoA is determined by the association between one's thoughts and actions. If the intentions to act are temporally initiated before the action, are consistent with the action itself, and are assumed to be the only plausible cause of the action, then we will experience the illusory feeling that our intentions caused our actions.
Like the Comparator Model, the Theory of Apparent Mental Causation has also been supported by a number of studies. For example, priming thoughts about an action seems to contribute to an illusory SoA for that action (Aarts et al., 2005; Wegner et al., 2004; Wegner & Wheatley, 1999). Furthermore, manipulating contextual information about the causation of an action can influence the SoA (Desantis et al., 2011).
While the Comparator Model and the Theory of Apparent Mental Causation offer competing explanations for the SoA, Moore and Haggard (2008) showed that both internal sensorimotor prediction and external action outcomes can contribute to the SoA. This finding has contributed to the development of a "cue integration theory" of the SoA (Moore et al., 2009), in which the SoA reflects various sources of cues (both external and internal) whose influence depends on their reliability.
Implicit indicators of the SoA have so far focused on variations in time perception, investigating primarily the so-called Intentional Binding (IB) effect (Haggard et al., 2002). Participants are usually asked to either report or reproduce the perceived time interval between a voluntary action and its outcome (often in the form of a sensory event). The literature has overall demonstrated that voluntary actions are perceived as having a compressed time duration when compared to control conditions in which the action and the outcome were independent (Ebert & Wegner, 2010; Haggard et al., 2002; Wenke & Haggard, 2009; Wohlschlager et al., 2003). Nevertheless, time perception can depend upon other factors falling outside the envelope of intentionality, such as causality (Buehner & Humphreys, 2009) and attention (Haggard & Cole, 2007). Therefore, the compression of time perception should not be considered a diagnostic indicator of the SoA. Rather, the difference in time perception between conditions can be used as an implicit measure of variations in the SoA.
The feeling of being in control is crucial for HMI. It has a significant impact on the human motivation to perform a collaborative task with the machine, as well as on the overall quality of task performance. Improvements in perceived control have also been found to affect the attribution of responsibility, attention to the task, and acceptance of the system (see Bednark & Franz, 2014; Caspar et al., 2016; Eitam et al., 2013; Shneiderman & Plaisant, 2010; Wen & Haggard, 2018). Previous studies showed a detrimental effect of automation on human SoA (Berberian et al., 2012; McEneaney, 2013; Obhi & Hall, 2011). The more the machine is in charge of the task, the less the human perceives control over it. This has also been confirmed by our previous experiments (Zanatto et al., 2021a, 2021b), where a keypress triggering an acoustic tone could be delivered by either the participants or the computer. When the computer was in charge of performing the keypress to trigger the sound, participants' SoA was reduced. However, when the computer delivered only a warning about the keypress task performed by the participants, IB significantly increased.
Nevertheless, the environment in which the human is willing to make changes often offers a wide range of decision alternatives associated with alternative actions (Cisek, 2012; Rosenbaum et al., 2013), and it is plausible that the SoA is differently involved depending on which options are being evaluated.

The What-Whether-When Model of intentional action
According to the What-Whether-When (WWW) Model, three decision components are critical to intentional action (Brass & Haggard, 2008; Haggard, 2008): a component related to action selection, i.e., the decision about which action to perform (What component); a component about whether to perform the selected action (Whether component); and, finally, a component related to action timing, i.e., the decision about when to perform the selected action (When component). It has been demonstrated that the what, whether, and when decision components relate to different neural processes, occurring in different regions of the brain (Brass et al., 2013; Krieghoff et al., 2011; Zapparoli et al., 2018). It has also been demonstrated that these decision components are associated with different motor performances (Becchio et al., 2014). Therefore, in the same way one expects the SoA to differ between intentional and stimulus-driven actions, we would expect the SoA to be selectively affected by each of the decision components.
Similar to the WWW model of intentional action, in hybrid HMI environments cooperation between humans and machines imposes choices about task allocation: for example, the human and the machine could each be assigned a specific job (What component), the task could be allocated alternately to the machine or the human (Whether component), or the human contribution could be limited to monitoring the machine's work and intervening when necessary (When component).
Following the evidence of different neural pathways, as well as selective motor performances, for the components of intentional action, and assuming similarities between those decision components and the loss of autonomy in HMI, the present study suggests that the WWW model of intentional action can be used to test how the SoA changes in hybrid HMI. In other words, how does the user's SoA change when the system constrains their action by telling them what/whether/when to act? Thus, the reduction of the SoA in HMI could be explained in terms of decision allocation. If we assume that the SoA is dependent upon what-whether-when, then it is plausible that the human SoA could be differently affected by a machine depending on which decision constraint is applied.
A first attempt to examine the effect of constraints on the SoA was conducted by Wenke et al. (2009). They manipulated whether participants could freely choose or were instructed to press one of two possible keys (free vs. restricted action selection) and when to press it (free vs. restricted action timing). IB was measured using the Libet clock. Results showed that being partially constrained reduces IB compared to when an action is either fully constrained or fully voluntary. In Barlas and Obhi (2013), participants could freely choose a button to press among seven/three/one options. They pressed the button at a self-determined time while watching the rotation of the Libet clock. IB was strongest with the seven-button option compared to the three- and one-button options. A recent study by Antusch et al. (2021) contradicted the previous findings. The authors used the Libet clock paradigm to assess how constraining an intentional action could affect the SoA. In three experiments, participants judged the onset of their keypress and the resulting auditory tone under conditions of no, partial, or full autonomy over selecting and timing their actions. Results showed no differences in IB between conditions. This last finding suggests that being unable to decide What and When to perform actions does not affect the SoA.

Aim of the present study
Research has applied a wide range of constraints to assess how the limitation of autonomy can affect the SoA. However, those limitations were mainly applied to the "what" component and, partially, the "when" component. Further, these studies did not directly compare conditions in which an action is either entirely voluntary or entirely externally constrained. To the best of our knowledge, no study has investigated whether all these different decision components also exert a specific influence on the SoA in HMI. Following this approach, the present study is designed to investigate whether and how voluntary decisions and their constraints shape the SoA. Specifically, we aim to determine: (i) how voluntary control affects the SoA over the execution of an action; (ii) to what extent specific decision components contribute to the SoA of voluntary and stimulus-driven actions.
Investigating the selective role of what-whether-when constraints in HMI could help identify what type of task keeps the user in control. Given that the literature shows hybrid autonomous environments to be a suitable ground for the user to remain in control, knowing which kind of instruction further improves or reduces the user's SoA would be useful for future system design.
In two experiments, whose structure was partially taken from Becchio et al. (2014), participants were asked to estimate the time interval between either voluntary or constrained actions, in the form of a keypress, and the following acoustic tone. In the first experiment, participants could either be free to choose what, whether, and when to act, or be fully constrained by the computer on such decisions. For this first experiment, we expected IB to be reduced in the fully constrained condition. Further, as previous experiments demonstrated that longer time delays favour the SoA in a partially assisted task (Zanatto et al., 2021a, 2021b), we expected IB at longer delays also in a fully constrained environment.
In the second experiment, the what, whether, and when decision components were dissociated and independently manipulated to clarify the differential contribution of each component to the SoA. In this experiment, we hypothesised that the SoA would vary with each single constraint. Further, such variations should show a reduction in IB when compared to a fully voluntary condition. Finally, time delay should affect each constraint manipulation differently. This would follow our previous studies and the hypothesis of a specific window of opportunity dependent upon the complexity of each task (Berberian et al., 2012).

Experiment 1. Voluntary action vs constrained action
In this experiment, participants were asked to press a key on the keyboard under either a "Constrained" or a "Voluntary" condition. In the Constrained condition, the action sequence was entirely predetermined: participants were instructed which key to press, whether to act, and when to perform the action. In the Voluntary condition, participants could freely choose what key to press, as well as whether and when to act.
If intentional and stimulus-driven perceptions of control are independent, we would predict participants' time estimations in the Voluntary condition to be shorter than in the Constrained condition.

Participants
Seventy-six individuals (36 female, 38 male, 2 non-binary; mean age = 20.90, SD = 1.81, range 18-30 years old) participated in the study. A sample size analysis for the effect of constraints and time delays on participants' estimation of time was computed using G*Power 3.1 (Faul et al., 2009). A sample of 67 participants would provide 80% statistical power for detecting a medium-sized effect (f = 0.15), assuming an alpha level of 0.05. Participants were recruited through the University students' mailing list. All participants were naive as to the purpose of the investigation and gave informed written consent to participate. The study was approved by the University of Bristol Research Ethics Committee. Participants received a token payment for their time.

Materials and design
The study was conducted online using the Pavlovia open science repository (https://pavlovia.org), and visual stimuli were coded using Psychopy (Peirce et al., 2019), with black text on a white background (font = Arial, letter height = 2.8 deg). The experiment was developed on a 14" monitor with a refresh rate of 60 Hz. Scaling was kept at 1 so that participants' screens would display stimuli at the same ratio online. Participants were asked to complete the experiment on a computer with a physical keyboard.
The experiment was counterbalanced in a two (condition: Constrained or Voluntary) by five (Time Delay between 500 and 2500 ms) within-subject design. The condition was manipulated in the action phase, where (a) participants would be in charge of triggering a sound event by following the computer's instructions (Constrained condition); (b) participants would be free to press a key at any time or delegate the action to the computer to trigger a sound event (Voluntary condition). Once the keypress was performed and the sound event was delivered, participants were asked to reproduce the time interval between the keypress and the sound event (interval estimation phase).
A stimulus interval (Time Delay) between the keypress and the sound event was added, with a delay of between 500 and 2500 ms. Participants were presented with all the time delays (in 500 ms steps) in each condition.
The experiment was composed of two blocks, one for each condition. The order of presentation of the conditions and time delays was randomised and counterbalanced within blocks.
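The randomisation scheme described above can be sketched as a simple trial-list generator (an illustrative Python re-implementation, not the original Psychopy code; it collapses over keys and Go/No-Go status, and all names are our own):

```python
import random

DELAYS_MS = (500, 1000, 1500, 2000, 2500)

def build_block(condition, reps_per_delay=10, seed=None):
    """Build one randomised block of the 2 x 5 within-subject design.

    Each of the five time delays occurs equally often within the block
    (10 x 5 = 50 trials), and trial order is shuffled so that the delay
    on any given trial is unpredictable."""
    rng = random.Random(seed)
    trials = [{"condition": condition, "delay_ms": d}
              for d in DELAYS_MS for _ in range(reps_per_delay)]
    rng.shuffle(trials)
    return trials

# One block per condition; block order would be counterbalanced across participants.
blocks = [build_block("Constrained", seed=1), build_block("Voluntary", seed=2)]
```

Seeding the generator is only for reproducibility of the example; in the actual experiment the order would differ for every participant.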

Procedure
Participants were instructed to trigger a sound event by pressing a key on their keyboard (action phase) in collaboration with the computer. Once the sound event was delivered, participants were asked to reproduce the time interval that occurred between the trigger keypress and the sound event (interval estimation phase).
In the Constrained condition, the action sequence to trigger the sound was entirely predetermined by the computer. Before the beginning of each trial, participants were instructed to press the A key or the L key: a message appeared on the screen declaring "in this trial, press 'A' (or 'L')." Prior to action onset, a visual message informed participants whether or not an action had to be performed ("Get ready to press the key/DO NOT press the key"). When an action was to be performed, a second message, delivered 2000 ms after the first, indicated when the action should start. At the due time, a "GO!" message appeared, indicating that the action had to be performed immediately. Participants were given 1 s to perform the keypress. If they did not comply with the task or did not press the key within 1 s, an error message was displayed ("Oops, that was wrong") and the trial was repeated. When no action was required of the participants, the sound of a virtual keypress was played after 2000 ms to indicate that the action had been performed by the computer. The sound was presented for 100 ms (a tone of 4000 Hz). The block consisted of 25 trials for each key (20 Go trials; 5 No-Go trials), for a total of 50 trials.
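The event timeline of a Constrained Go trial can be laid out explicitly (a hypothetical sketch of the timings described above, not the original code; offsets are in ms from the readiness message):

```python
def constrained_go_schedule(key):
    """Event timeline (ms) for a Go trial in the Constrained condition.

    The 'GO!' message follows the readiness message by 2000 ms, and the
    participant then has a 1 s window to press the instructed key."""
    return [
        (0, f"show: in this trial, press '{key}'"),  # What: key instruction
        (0, "show: Get ready to press the key"),     # Whether: Go message
        (2000, "show: GO!"),                         # When: action onset cue
        (3000, "deadline: show error and repeat trial if no keypress"),
    ]

schedule = constrained_go_schedule("A")
```

Treating the trial as a fixed schedule like this makes the 1 s response window explicit as the gap between the "GO!" cue and the deadline.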
In the Voluntary condition, participants were asked to freely select what key to press, whether to perform the action, and when to make it. At the beginning of each trial, a message was displayed communicating that participants had 5 s to decide which key to press, whether to act, and when to start their action. If no action was taken by the participants within 5 s, a message communicated that the keypress would be performed by the computer. A virtual keypress was played after 2000 ms. The sound was presented for 100 ms (a tone of 4000 Hz). This block also consisted of 50 trials.
In the second part of each trial, participants completed the interval estimation phase. Once the keypress was performed, a tone was presented following a specific time interval (500-2500 ms, in 500 ms steps). The sound was presented for 100 ms (a tone of 460 Hz). The tone occurring after the keypress was the same for both conditions. The estimation task began 1000 ms after the tone. The word "Estimate" appeared at the centre of the screen, indicating that participants could make their estimation. To do so, they were invited to press the spacebar twice, the first time to begin the estimation and the second to end it. During the estimation, a timer was displayed at the centre of the screen.
Participants underwent a practice session at the beginning of each experimental block. For both conditions, participants performed five familiarisation trials, one for each time delay. In the Constrained condition, participants were asked to perform the keypress in four trials (two for each key), while no action was required in only one trial.
At the end of the experiment, participants were thanked and debriefed.

Data analyses
Participants' SoA was assessed by testing the effect of constraints and time delay on the perceived interval in the form of an Estimation Score. This was calculated as the difference between the interval reproduced by the participants and the actual time delay. A negative Estimation Score signals a shorter perceived time interval, and its variations can be used as an implicit indication of variations in the SoA.
Offline pre-processing of behavioural data and statistical analyses were performed with custom scripts in the R statistical software. The mean-based method was used to detect outlier performances on the Estimation Score: a participant was considered an outlier if their mean Estimation Score differed from the sample mean by more than two standard deviations (in either direction). Data from five participants who produced extreme interval reproductions were removed and not submitted for further analysis. Therefore, seventy-one participants were included in the analysis.
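The Estimation Score and the two-standard-deviation exclusion rule amount to the following (a minimal Python sketch of the pre-processing, standing in for the original R scripts; the data in the example are invented):

```python
from statistics import mean, stdev

def estimation_score(reproduced_ms, actual_delay_ms):
    """Difference between reproduced and actual interval.

    Negative values signal a compressed perceived interval, taken here as
    an implicit marker of a stronger Sense of Agency."""
    return reproduced_ms - actual_delay_ms

def split_outliers(mean_scores, n_sd=2.0):
    """Separate participants whose mean Estimation Score lies more than
    n_sd standard deviations from the sample mean (in either direction)."""
    m, sd = mean(mean_scores), stdev(mean_scores)
    kept = [s for s in mean_scores if abs(s - m) <= n_sd * sd]
    removed = [s for s in mean_scores if abs(s - m) > n_sd * sd]
    return kept, removed
```

For example, `estimation_score(1400, 1500)` returns −100, i.e., a 100 ms underestimation of the interval.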
Table 1 shows the frequency of keypresses performed in each condition, the number of times each key was pressed, and the mean response times.

Active vs passive trials
To assess the validity of the testing design, preliminary comparisons between active and passive trials were performed. In passive trials, participants did not perform any action, and therefore time intervals should be perceived differently from the active trials.
Data were split into active and passive trials, and a t-test was performed to look for significant variations in the Estimation Score. Results showed a significant difference between active and passive trials (t(63) = 13.60, p < 0.001, d = 0.59). Passive trials yielded a positive Estimation Score (mean = 36.84, SD = 114.04), while an underestimation of time was found for active trials (mean = −39.57, SD = 119.01). Figure 1 also shows Estimation Scores for active and passive trials across the different time delays.
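The active-passive contrast is a paired comparison; the test statistic and a paired-samples effect size can be computed from per-participant means as follows (a stdlib sketch with invented numbers; note that the d_z variant below need not match the d reported above, which may follow a different formula):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_dz(active, passive):
    """Paired t statistic and Cohen's d_z for active vs passive scores.

    Both are computed from the per-participant differences; a negative t
    here means active trials were underestimated relative to passive ones."""
    diffs = [a - p for a, p in zip(active, passive)]
    md, sd = mean(diffs), stdev(diffs)
    t = md / (sd / sqrt(len(diffs)))   # paired t statistic
    dz = md / sd                       # effect size on the difference scores
    return t, dz
```

With four hypothetical participants, `paired_t_and_dz([-40, -35, -45, -38], [35, 40, 30, 42])` yields a large negative t, mirroring the direction of the reported result.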

Constrained vs voluntary
Trials in which participants did not perform any action were removed to look for variations in time interval perception in the active trials. A repeated measures analysis of variance (ANOVA) was conducted with Estimation Score as the dependent variable and condition (Constrained/Voluntary) and time delay (500-2500 ms) as within-subject factors. Post hoc comparisons were assessed using t-tests, and Bonferroni correction was applied when needed.
A significant two-way interaction between condition and time delay was found (F(4, 252) = 3.66, p = 0.006, ηp² = 0.01). However, the effect size was negligible and, hence, the interaction was not considered for further analysis.

Discussion
For this experiment, there were two main hypotheses, as previously presented. First, we expected variations in the IB effect between the Constrained and the Voluntary condition, driven by a reduction in freedom of choice. This hypothesis was confirmed: participants' estimations were significantly longer in the Constrained condition. Although participants showed an IB effect in the Constrained condition, it was significantly weaker than in the Voluntary condition. The experiment confirmed that the SoA in HMI can be associated with the degrees of freedom given to the human. In other words, the SoA is affected by the number of options available to the user. The comparison between passive trials (where no action was performed by the participants) and active trials (where participants triggered the acoustic tone either voluntarily or under constraint) showed an absence of IB for the former. Nevertheless, even in a fully constrained environment, participants underestimated the time interval. This is hypothesised to be mostly due to the physical involvement in the task, where the action could be considered the primary component of the experience of the SoA. This result opens the possibility of further implementing SoA features in an HMI interface.
In a recent paper by Vantrepotte et al. (2022), participants' SoA in an avoidance task was measured. Levels of automation were varied by implementing conditions in which the participant was free or not free to choose which direction to take. Results showed IB in the forced-choice condition, which was explained as a function of the assistance provided by the system to the human operator. Further, in a tracking task in which participants continuously monitored a moving target through a cursor controlled by a joystick under different levels of automation, Ueda et al. (2021) showed that participants' SoA was enhanced by increasing automation but began to decline when the level of automation exceeded 90%. Following these results, it is plausible to assume that allowing operators even a small contribution to the task with an automated tool is enough to maintain their SoA.
In line with these studies, the results of Experiment 1 could indicate that being assisted by the computer reduces uncertainty over the task. Therefore, providing some level of assistance to the user in HMI could improve their performance without affecting their sense of control (Berberian, 2019; Coyle et al., 2012; Ueda et al., 2021).
The second hypothesis was also confirmed. We expected IB to increase at longer time delays, as demonstrated in previous experiments (Zanatto et al., 2021a, 2021b). As shown in Figure 2, for the Constrained condition IB was found for time delays longer than 1000 ms. This replicates previous evidence that the system needs to deliver instructions in a timely fashion so that the operator's action is perceived as intentional. Further, it supports the hypothesis that there is a specific window of opportunity for action, dependent upon the characteristics of the task, as suggested by Berberian et al. (2012).
The present results lay the foundation for the next experiment, in which the contribution of each single type of constraint to the SoA is assessed. In HMI, it is not always the case that the system can fully guide the operator; rather, partial automation may contribute to the task. This could also depend on the type of environment. Establishing the contribution of each constraint could lead to further improvement of new systems and interfaces, provided that these contributors are tested in different environments in the future.

Experiment 2. Voluntary action vs single constraints (What, Whether, When)
To clarify the contribution of specific decision components to participants' SoA, the What, Whether, and When components of actions in the action phase were independently constrained. Stimuli, apparatus, and procedure were the same as for Experiment 1, with the exception that four experimental conditions were created: "What-constrained," "Whether-constrained," "When-constrained," and "Voluntary."

Participants
Seventy-five subjects (38 females, 36 males, 1 non-binary; mean age = 21.50, SD = 3.26, range 18-39 years old) participated in the study. As for Experiment 1, a sample size analysis for the effect of constraints and time delays on participants' estimation of time was computed using G*Power 3.1 (Faul et al., 2009). A sample size of seventy-four participants would provide 80% statistical power for detecting a medium-sized effect (f = 0.15), assuming an alpha level of 0.05. Participants were naive as to the purpose of the investigation and gave informed written consent to participate in the study. The study was approved by the University of Bristol Research Ethics Committee. Participants received a token payment for their time.

Materials and design
The study was conducted online using the Pavlovia open science repository (https://pavlovia.org), and visual stimuli were coded using Psychopy (Peirce et al., 2019), with black text on a white background (font = Arial, letter height = 2.8 deg). The experiment was developed on a 14" monitor with a refresh rate of 60 Hz. Scaling was kept at 1 so that participants' screens would display stimuli at the same ratio online. Participants were asked to complete the experiment on a computer with a physical keyboard.
The experiment was counterbalanced in a four (condition: What-constrained, Whether-constrained, When-constrained, Voluntary) by five (Time Delay between 500 and 2500 ms) within-subject design. The condition was manipulated in the action phase, where (a) participants would be told which key to press to trigger a sound event (What-constrained condition); (b) participants would be told whether to press a key to trigger a sound event (Whether-constrained condition); (c) participants would be told when to press a key to trigger a sound event (When-constrained condition); (d) participants would be free to press a key at any time or delegate the action to the computer to trigger a sound event (Voluntary condition). Once the keypress was performed and the sound event was delivered, participants were asked to reproduce the time interval between the keypress and the sound event (interval estimation phase).
A stimulus interval (Time Delay) between the keypress and the sound event was added, with a delay of between 500 and 2500 ms. Participants were presented with all the time delays (in 500 ms steps) in each condition.
The experiment was composed of four blocks, one for each condition. The order of presentation of the conditions and time delays was randomised and counterbalanced within blocks.

Procedure
Participants were instructed to trigger a sound event by pressing a key on their keyboard (action phase). Once the sound event was delivered, participants were asked to reproduce the time interval that occurred between the trigger keypress and the sound event (interval estimation phase).
In the What-constrained condition, prior to the beginning of each trial, participants were instructed to press either the "A" key or the "L" key. They were free to decide whether to act and when to press the key. After an initial message indicating which key to press was displayed, participants had 5 s to decide whether and when to act. If no action was taken after 5 s, the computer would virtually perform the keypress, as in the previous experiment.
In the Whether-constrained condition, participants were free to choose which key to press and when to start the action, but whether to act was dictated by a message from the computer. In this condition, a Go/No-Go message would appear on the screen at the beginning of each trial ("Get ready to press the key/DO NOT press the key"). Participants had 5 s to decide which key to press and when to start the action in case they were ordered to act. If not allowed to perform the keypress, the computer would virtually press the key, as in the previous experiment.
In the When-constrained condition, participants were free to decide which key to press and whether to act, but not when to start the action. In this condition, a message appeared on the screen to indicate the onset of the action. As in the previous experiment, the message was delivered 2000 ms after the start of the trial. Participants had 1 s to act. If no action was performed within 1 s, the computer would virtually perform the keypress.
The Voluntary condition was the same as in Experiment 1. The Whether-constrained condition consisted of 50 trials: 40 active trials in which participants were forced to perform the keypress, and 10 passive trials in which the keypress was performed by the computer. The remaining conditions consisted of 40 trials each.
In the second part of each trial, participants completed the interval estimation phase as in Experiment 1. Participants underwent a practice session at the beginning of each experimental block. For all four conditions, participants completed five familiarisation trials, one for each time delay. In the Whether-constrained condition, participants were asked to perform the keypress in four trials, while no action was required in the remaining one.

Data analysis
Participants' SoA was assessed by testing the effect of each constraint and time delay on the perceived interval, expressed as an Estimation Score. This was calculated as the difference between the interval reproduced by the participants and the actual time delay. Variations in the Estimation Score were used as an implicit indicator of variations in the SoA.
Eight participants were excluded from the final analyses for not complying with the experiment instructions (e.g., not completing the study, or letting the interval estimation run for more than 1 min). Offline pre-processing of behavioural data and statistical analyses were performed with custom scripts in the R statistical software. The mean-based method was used to detect outlier performances on the Estimation Score: a participant was considered an outlier if the absolute difference between their mean Estimation Score and the sample mean exceeded two standard deviations. Four participants were excluded from the analyses for producing extreme estimations. Therefore, sixty-three participants were included in the analysis. Table 2 shows the frequency of keypresses performed for each condition, the number of times each key was pressed, and the mean response times.
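The original analyses were run with custom R scripts; purely as an illustration, the Estimation Score and the mean-based outlier rule described above can be sketched in Python (function and variable names are ours, not the authors'):

```python
import numpy as np

def estimation_scores(reproduced_ms, actual_delay_ms):
    """Estimation Score = reproduced interval minus actual time delay.
    Negative scores indicate temporal compression (Intentional Binding)."""
    return np.asarray(reproduced_ms) - np.asarray(actual_delay_ms)

def outlier_mask(per_participant_means, n_sd=2.0):
    """Mean-based outlier detection: flag participants whose mean
    Estimation Score lies more than n_sd standard deviations (in either
    direction) from the sample mean."""
    means = np.asarray(per_participant_means, dtype=float)
    return np.abs(means - means.mean()) > n_sd * means.std(ddof=1)

# Illustrative values only (not the study's data):
print(estimation_scores([900, 1400, 2600], [1000, 1500, 2500]))  # [-100 -100  100]
```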

Active vs passive trials
To assess the validity of the testing design, preliminary comparisons between active and passive trials were performed. In passive trials, participants did not perform any action, and time intervals should therefore be perceived differently from the active trials. Data were split into active and passive trials, and a t-test was performed to look for significant variations in the Estimation Score. Results showed a significant difference between active and passive trials (t(62) = 8.73, p < 0.001, d = 0.49). Passive trials yielded a positive Estimation Score (mean = 70.95, SD = 79.43), significantly greater than in the active trials (mean = −14.60, SD = 64.10). Figure 3 shows the Estimation Scores for both active and passive trials at each time delay.
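The active-versus-passive comparison is a paired design, since each participant contributes both an active and a passive mean. A minimal sketch of such a paired-samples t-test, assuming per-participant mean Estimation Scores as input (the original analysis was done in R; names here are illustrative):

```python
import numpy as np

def paired_t(passive, active):
    """Paired-samples t-test on per-participant means.
    Returns the t statistic and its degrees of freedom (n - 1)."""
    d = np.asarray(passive, float) - np.asarray(active, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)  # standard error of the mean difference
    return d.mean() / se, n - 1
```

With 63 participants, as in the study, the test has 62 degrees of freedom, matching the reported t(62).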

The effect of each constraint
Trials in which the acoustic tone was triggered by the computer were removed in order to examine variations in the participants' SoA in the active trials. A repeated-measures analysis of variance (ANOVA) was conducted with Estimation Score as the dependent variable, and condition (Voluntary/What-constrained/Whether-constrained/When-constrained) and time delay (500-2500 ms) as within-subject factors. Post hoc comparisons were assessed using t-tests, and Bonferroni correction was applied when needed.
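The full model is a 4 (condition) × 5 (time delay) repeated-measures ANOVA, fitted in R in the original analysis. As an illustration of the underlying variance partitioning only, the following Python sketch computes a one-way repeated-measures F test for a single within-subject factor (it omits the second factor and the interaction):

```python
import numpy as np

def rm_anova_oneway(Y):
    """One-way repeated-measures ANOVA.
    Y: (n_subjects, k_conditions) array, e.g. mean Estimation Scores.
    Returns F and its degrees of freedom (df_condition, df_error)."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    gm = Y.mean()  # grand mean
    ss_cond = n * ((Y.mean(axis=0) - gm) ** 2).sum()   # between-condition SS
    ss_subj = k * ((Y.mean(axis=1) - gm) ** 2).sum()   # between-subject SS
    ss_total = ((Y - gm) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj              # residual SS
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), (df_cond, df_err)
```

For k = 2 conditions this F equals the square of the paired t statistic, which provides a quick sanity check of the sketch.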

Results
A significant main effect of condition was found (F(3, 186) = 36.21, p < 0.001, ηp² = 0.37). IB was found for both the Voluntary (mean = −50.28 ms, SD = 112.07) and the When-constrained (mean = −35.48 ms, SD = 121.66) conditions. Post hoc comparisons showed that the Estimation Score in the Voluntary condition was significantly more negative than in the What-constrained (SD = 129.74) and the Whether-constrained (mean = 32.28, SD = 91.96) conditions (ps < .050). Further, the When-constrained condition obtained a lower Estimation Score than the What-constrained and Whether-constrained conditions (ps < .050). No other significant differences were found (ps > .050).
Finally, a significant two-way interaction between condition and time delay was shown (F(12, 744) = 5.65, p < 0.001, ηp² = 0.02). However, the effect size was too small, and the interaction was not investigated further.

Response time on active trials
For the active trials, the time participants took to press the key that triggered the acoustic tone was also recorded. While this dependent variable was of no interest in the previous experiment (given that the Constrained condition left no options to the participants), in this second experiment the response time could reveal useful insights. Given that the When-constrained condition dictated the time to act, it was removed from this analysis. A repeated-measures analysis of variance (ANOVA) was conducted with Response Time as the dependent variable, and condition (Voluntary, What-constrained, Whether-constrained) and type of key pressed (A or L) as within-subject factors. Post hoc comparisons were assessed using t-tests, and Bonferroni correction was applied when needed. Condition affected participants' response time (F(2, 62) = 10.63, p < 0.001, ηp² = 0.36). Participants triggered the sound significantly earlier in the Voluntary condition (mean = 572.27 ms, SD = 142.49) than in both the What-constrained (mean = 674.54 ms, SD = 133.67, p < 0.050) and the Whether-constrained (mean = 675.79 ms, SD = 119.26, p < 0.050) conditions. No other significant effects were found (ps > 0.050).

Discussion
For this experiment, it was hypothesised that variations in the SoA would emerge for every single constraint. In the What-constrained condition, participants' estimation was longer than the real time delay, supporting the evidence that SoA can be compromised by the lack of available options (Barlas & Obhi, 2013). Being forced to act, with no other precise indication (as in the Whether-constrained condition), had a similar effect to being told which option to choose: no IB was found, and the Estimation Score did not differ from the What-constrained condition. These results lead to the conclusion that being told what to do and whether to act could have significant effects on the user's SoA. Furthermore, the analysis of the response times revealed that participants triggered the acoustic tone significantly later in both the What-constrained and the Whether-constrained conditions compared to the Voluntary condition. While no clear theoretical explanation can be drawn from these results, we hypothesised that this could reflect either the participants' need to re-program their intentional action, or a precautionary attitude towards the task to avoid mistakes. This, however, needs further experimental assessment to establish a clear relationship between the SoA and the response times. The Estimation Score in the When-constrained condition was not significantly different from the Voluntary condition, indicating that the time component of intentional action can potentially be considered the least important. In other words, indicating the time of action to users, while leaving them in charge of deciding how to conduct the action, could be enough to keep them involved in the task. However, removing the context of the action and its target equally reduces control over the task.
As in the first experiment, we also hypothesised that time delay would affect each constrained condition differently. The effect size for the two-way interaction between condition and time delay was too small to evaluate this interaction further. However, Figure 4 shows how SoA develops differently in each condition across the different time delays. Although no IB was found for the What-constrained and the Whether-constrained conditions in the main effect of condition, it seems to appear for longer time delays. Specifically, in the What-constrained condition, IB is found for delays longer than 2000 ms, whereas for the Whether-constrained condition, 1500 ms are needed. Given the lack of a significant effect, these results only show a trend and partially support our hypothesis. Nevertheless, such a trend is in line with the hypothesis of a specific window of opportunity for the SoA, dependent upon the complexity of the task.

Conclusion
The present study aimed to investigate the effect of decision constraints on the SoA. In Experiment 1, results showed that IB is present in a fully constrained condition. This has been interpreted in terms of action execution, with the physical act of triggering the acoustic tone being performed by the participants. Under this interpretation, participants would still experience a degree of responsibility for the task, leading to IB. Nevertheless, a reduction in IB was found for the Constrained condition when compared to the Voluntary condition. A possible interpretation is linked to the level of assistance provided by the computer, as reported by Vantrepotte et al. (2022). In that study, IB was still detected in conditions in which the machine was substantially automated. In line with the action-deployment explanation, a highly automated condition could be viewed as a scenario in which the machine guides and assists the user during the task but leaves the final implementation of the action to the user. This means that future designs for hybrid HMI systems need to consider an assistive type of approach to reduce the chances of the Out-of-the-Loop (OOTL) experience.
The second experiment was designed to test the effect of every single constraint on the SoA. Results showed that constraining the time of action, while leaving participants free to decide what action to take and whether to perform it, has an effect on the IB comparable to the Voluntary condition. Conversely, reducing the number of available options can compromise the participants' SoA, as previously reported in the literature (Barlas & Obhi, 2013; Wenke et al., 2009). This result confirmed the hypothesis that each decision component has a different effect on the SoA. Our interpretation of these results suggests that a re-programming process underlies the reduction of SoA. Participants would be forced to re-program their action in a forced-choice condition (as in the What-constrained and Whether-constrained conditions), as shown by their slower response times. This, in turn, could come at the expense of the SoA. Nevertheless, this interpretation needs further investigation.
A final remark concerns the role of the time delay. As in previous studies (Zanatto et al., 2021a, 2021b), the length of the time delay between the action and its consequence seems to consistently affect the IB. Previous results supported the idea that each task could have a specific "window of opportunity" linked to its degree of difficulty (Berberian et al., 2012). This is further supported by the present results. For each decision component, there seems to exist a specific time window in which SoA can flourish. Specifically, the time of action (as in the When-constrained condition) can be manipulated for delays longer than 1000 ms. Differently, a forced choice over what action to perform should be manipulated for delays longer than 2000 ms, while the handover task (similar to the Whether-constrained condition) would need a 1500 ms delay.
The present study is particularly important in opening new avenues for investigating the role of the human when using new technology. Specifically, the present results demonstrate that investigating the SoA can be a useful way of testing the design of systems and interfaces. By simply decomposing the task that the user should perform, it would be possible to estimate how much control they would perceive and, in turn, recalibrate the system's intervention on that basis.

Implications for systems design
Future designs should carefully consider the nature of the task from a human perspective. The present results show that each type of task brings a different amount of control and a different time window for implementation. Therefore, in the future, each task should be carefully decomposed in terms of the decision-making process. In other words, designers should investigate what each task entails in terms of decisions, in order to facilitate the persistence and increase of the user's control over the task (e.g., Is it a handover kind of task? Does it require the user to have no choice about which task to complete? Does the task require a timeframe?).

Limitations and further research
Future studies should replicate the current experiments using a more meaningful scenario. As a window of opportunity for SoA was found, it could also vary depending on the purposes and demands of the task.
Further, future studies should consider using such a paradigm to test variations of SoA among different levels of expertise (e.g., experienced users/operators).
Finally, the second experiment investigated how a single constraint can affect the SoA, thus leaving the user in command of two decisions. Future studies should investigate how double constraints could influence the SoA, leaving the user with one sole decision to take.

Research involving human participants and/or animals
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Figure 2. Estimation Score for each condition (Constrained/Voluntary) and time delay. Bars indicate the standard error of the mean.

Figure 3. Estimation Score for both active and passive trials at each time delay (500-2500 ms). Bars indicate the standard error of the mean.

Figure 4. Estimation Score for each condition (Voluntary/What-constrained/Whether-constrained/When-constrained) and each time delay (500-2500 ms). Bars indicate the standard error of the mean.

Table 1. The frequency of keypress performed for each condition, the number of times each key was pressed, and the mean response times for Experiment 1.
Figure 1. Estimation Score for active and passive trials at each time delay. Bars indicate the standard error of the mean.

Table 2. The frequency of keypress performed for each condition, the number of times each key was pressed, and the mean response times for Experiment 2.