A rehabilitation robot control framework with adaptation of training tasks and robotic assistance

Robot-assisted rehabilitation has exhibited great potential to enhance the motor function of physically and neurologically impaired patients. State-of-the-art control strategies usually allow the rehabilitation robot to track the training task trajectory along with the impaired limb, and the robotic motion can be regulated through physical human-robot interaction (pHRI) for comfortable support and an appropriate assistance level. However, it is hardly possible, especially for patients with severe motor disabilities, to continuously exert force to guide the robot through the prescribed training task. Conversely, simply reducing the task difficulty fails to stimulate patients' potential movement capabilities. Moreover, when subjects show improved performance, challenging them with more difficult tasks under minimal robotic assistance is usually ignored. In this paper, a control framework is proposed to simultaneously adjust both the training task and the robotic assistance according to the subjects' performance, which can be estimated from the users' electromyography signals. Concretely, a trajectory deformation algorithm is developed to generate smooth and compliant task motion in response to pHRI. An assist-as-needed (AAN) controller along with a feedback gain modification algorithm is designed to promote patients' active participation according to individual performance variance in completing the training task. The proposed control framework is validated on a lower extremity rehabilitation robot through experiments. The experimental results demonstrate that the control scheme can optimize the robotic assistance to complete the subject-adaptive training task with high efficiency.


Introduction
Due to the rapidly increasing number of physically and neurologically impaired patients around the world, rehabilitation robots have been developed to assist in the therapeutic training of impaired limbs, improving rehabilitation efficiency and saving human labor through highly autonomous assistance (Xu et al., 2020a; Xu et al., 2020b). The control strategy of a rehabilitation robot significantly influences rehabilitation efficacy. In most clinical cases, physiotherapists feed the task trajectories into the robot controller before rehabilitation starts. However, patients can only modify the robot's current trajectory through physical human-robot interaction (pHRI) without affecting the future task trajectory. Particularly for patients with severe impairment, it is hardly possible to continuously exert adequate force to change the robot's movement trajectory over a period of time, and their recovery, comfort, and safety cannot be guaranteed accordingly. Therefore, online adaptation of desired trajectories to patients' performance is indeed necessary. Dynamic movement primitives (DMPs) (Schaal, 2006) and central pattern generators (CPGs) (Sproewitz et al., 2008) are two common tools for trajectory generation, and they have been applied in rehabilitation robotics research (Luo et al., 2018; Yuan et al., 2020). In Xu et al. (2023), coupled cooperative primitives are formulated, where pHRI is expressed as a first-order impedance model and treated as a modulation term in the DMP; however, tuning the impedance model parameters remains problematic. The human-robot interaction energy has been combined with adaptive CPG dynamics to plan gaits for exoskeletons (Sharifi et al., 2021); however, this introduces many uncertain parameters, whose resolution is time-consuming. In addition, it has been found that the robot's desired trajectory can be deformed in response to subject actions (Lasota and Shah, 2015), but it is unclear which deformed trajectory is optimal. The robot's future desired trajectory can be modified, and the optimal solution of the trajectory deformation can be derived by detecting the human-robot interaction force (Losey and O'Malley, 2018). Similarly, trajectory deformation has been applied to robotic rehabilitation, where a position controller is adopted to track the deformed trajectory, ignoring the rehabilitation effectiveness of AAN training (Zhou et al., 2021). Apart from movement trajectory planning, users' voluntary participation should be stimulated by adjusting the robotic assistance.
Additionally, passive control is usually employed to drive impaired limbs along the predefined task trajectory, in which case the active participation of patients cannot be encouraged to stimulate motor function recovery (Hogan et al., 2006). To overcome this problem, the assist-as-needed (AAN) control strategy has been introduced to adapt the robotic assistance to the patients' performance (Marchal-Crespo and Reinkensmeyer, 2009). An impedance/admittance control scheme is a common solution for addressing the physical relationship between humans and robots. An impedance controller based on a virtual tunnel regulates the robotic assistance in response to the tracking error between the current trajectory and the desired trajectory (Krebs et al., 2003). A torque-tracking impedance controller has been proposed for lower limb rehabilitation robots to generate assistance while ensuring acceptable trajectory deviation (Shen et al., 2018). An admittance controller incorporating electromyography (EMG) signals has been developed to improve human-robot synchronization (Zhuang et al., 2019). Both impedance and admittance control can regulate the relationship between trajectory deviation and interaction effect by tuning the inertia-damping-stiffness parameters. In fact, different patients, or even the same patient at different rehabilitation stages, can exhibit varying motor capabilities, so accurate determination of these parameters is essential to realize AAN training for different subjects and tasks. Furthermore, in practical rehabilitation applications, dynamic human forces and uncertain external disturbances often occur and cannot be measured intuitively and accurately. Both inappropriate impedance/admittance parameters and unknown dynamic interactive environments can lead to unstable and oscillating robot behaviors, which may decrease motion smoothness and even threaten human safety (Ferraguti et al., 2019). Moreover, current AAN controllers are designed to provide only the necessary robotic assistance to complete the prescribed training task, which is not suitable for encouraging patients to challenge themselves with more difficult tasks and stimulate their potential motor capabilities for improved rehabilitation efficacy.
In this article, a control framework is proposed for the simultaneous adaptation of training tasks and robotic assistance according to patient performance. The main contributions of this article are listed as follows.
(1) A trajectory deformation algorithm is developed to plan the robot's desired trajectory as the high-level controller, where the continuity, smoothness, and compliance of the robotic motion are achieved in response to the subject's biological performance.
(2) An AAN control strategy with a feedback gain modification algorithm is employed to regulate the robotic assistance as the low-level controller. The controller is designed to encourage active participation by learning the patient's residual motor capabilities and accurately tracking the deformed trajectory.
(3) Both the training task and robotic assistance adaptation algorithms are integrated into a framework to realize human-in-the-loop optimization, and this control framework is validated using a lower extremity rehabilitation robot.
The remainder of the article is organized as follows. The biological signal processing is described in Section 2. The trajectory generation is presented in Section 3, and the subject-adaptive AAN controller is explained in Section 4. The proposed control framework is verified through experiments in Section 5. Finally, this study is concluded in Section 6.
The schematic view of the proposed control framework is presented in Figure 1, and the detailed explanation is elaborated in the following sections.

Biological signal processing
For rehabilitation, robotic motions should be regulated by adapting to the user's muscle strength, which can be derived from surface EMG signals measured on the human skin. In this section, the EMG-driven musculoskeletal model for torque estimation is presented. EMG sensors (ETS FreeEMG300) are used to measure the EMG signals, and the electrodes are attached to the relevant skin surface. Specifically, in the application of lower extremity rehabilitation, six electrodes are attached to the gluteus maximus, semimembranosus, biceps femoris, iliopsoas, sartorius, and rectus femoris for hip flexion/extension; six electrodes are attached to the rectus femoris, vastus medialis, vastus lateralis, biceps femoris, semimembranosus, and semitendinosus for knee flexion/extension. The raw EMG signals are sampled at 1,024 Hz, band-pass filtered from 10 Hz to 500 Hz, and then notch filtered at 50 Hz to remove power-line noise. The muscle activation a is calculated by the neural activation function

a = (e^(Au/R) − 1)/(e^A − 1), (1)

where u is the post-processed EMG value, R is the EMG value at maximum voluntary isometric contraction, and A < 0 is a nonlinear shape factor defining the curvature of the function. Subsequently, a Hill-type muscle model is constructed to describe the relationship between muscle activation and muscle force (Yao et al., 2018). The force produced by the muscle-tendon unit, F^mt, is given by the set of equations (2), where F^mt, F^m, F^ce, and F^pe represent the force generated by the muscle-tendon unit, the tendon force, the contractile element force, and the passive element force, respectively; l^t is the length of the muscle tendon; α is a scaling factor; and φ is the pennation angle, given by

φ = arcsin(l_0 sin φ_0 / l^m), (3)

where φ_0 is the optimal pennation angle and l_0 is the optimal length of the muscle fiber. The scale factor α will be used in the optimization process. The human joint torque is produced by the coupling of the agonistic and antagonistic muscles, that is,

τ_h = Σ_{i=1}^{J} F^m_i r^m_i − Σ_{j=1}^{L} F^m_j r^m_j, (5)

where τ_i = F^m_i r^m_i and τ_j = F^m_j r^m_j denote the torques exerted by the agonistic and antagonistic muscles, respectively. F^m_i and F^m_j are the muscle-tendon forces, and r^m_i and r^m_j are the moment arms of the muscle-tendon units, which can be estimated from the muscle-tendon length l^mt and the joint angle q as r^m = ∂l^mt/∂q. The parameters J and L denote the numbers of agonistic and antagonistic muscles acting on the joint, respectively.
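As a concrete illustration of the signal chain above, the filtering and activation mapping can be sketched in Python; the filter order, the notch quality factor, and the value of A below are placeholder choices rather than the paper's calibrated settings:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 1024.0  # sampling rate in Hz, as stated in the text

def preprocess_emg(raw):
    """Band-pass 10-500 Hz, then 50 Hz notch, mirroring the described pipeline."""
    b, a = butter(4, [10.0, 500.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw)
    bn, an = iirnotch(50.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, x)

def activation(u, R, A=-2.0):
    """Neural activation a = (e^(Au/R) - 1) / (e^A - 1), with shape factor A < 0."""
    return (np.exp(A * u / R) - 1.0) / (np.exp(A) - 1.0)

def joint_torque(F_m, r_m, agonist):
    """Net joint torque: agonist contributions minus antagonist contributions."""
    sign = np.where(agonist, 1.0, -1.0)
    return float(np.sum(sign * F_m * r_m))
```

Note that activation(0, R) = 0 and activation(R, R) = 1, so the mapping normalizes the processed EMG into [0, 1] regardless of the chosen A.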
In the proposed EMG-driven musculoskeletal model, it is essential to determine the model parameters, i.e., the shape factor A and the scaling factor α. The human joint torque during exercise can be directly obtained via the AnyBody Modeling System (AMS) (Damsgaard et al., 2006); however, the torque generated by AMS is neither continuous nor fully realistic. The real joint torque can be measured through calibration experiments with EMG signals, but these are quite complex and time-consuming. Therefore, an optimization is undertaken to adjust the EMG-driven musculoskeletal model parameters so as to minimize the difference between the torque estimated from the EMG signals, τ̂_h, and the torque obtained via AMS, τ_AMS.
The optimization procedure is illustrated in Figure 2. The processed EMG signals are converted to muscle activation using the neural activation function with the uncertain shape factor A. Then the muscle contraction model (Yao et al., 2018) is established to calculate the muscle-tendon force and the muscle torque through Equations 2-5, where the muscle-tendon length l^mt and the moment arm r^m can be obtained from AMS. On the other hand, a given driving motion is loaded into AMS to obtain the human joint torque τ_AMS. The optimization aims to shrink the difference between τ̂_h and τ_AMS. The parameters to be optimized are selected as p = [A, α]^T, and the optimization problem is defined as (6).
min_p Σ_{k=1}^{N} (τ̂_h(k) − τ_AMS(k))², (6)

where N is the number of samples. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (Peña et al.), together with a penalty barrier algorithm, is employed to find the optimal parameters.
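A minimal sketch of this identification step using SciPy's BFGS follows; the penalty barrier on A < 0 is omitted, and `torque_model` is a hypothetical stand-in for the full musculoskeletal mapping of Equations 2-5:

```python
import numpy as np
from scipy.optimize import minimize

def fit_model_params(emg, tau_ams, torque_model, p0=(-1.0, 1.0)):
    """Minimize sum_k (tau_hat(k) - tau_AMS(k))^2 over p = [A, alpha], as in (6)."""
    def cost(p):
        tau_hat = torque_model(emg, A=p[0], alpha=p[1])
        return np.sum((tau_hat - tau_ams) ** 2)
    res = minimize(cost, x0=np.asarray(p0), method="BFGS")
    return res.x
```

Given samples of τ_AMS from a driving motion and the EMG-to-torque mapping, the routine returns the parameter vector that best reconciles the two torque estimates.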

Trajectory adaptation
Prior to operating the rehabilitation robot, the training task needs to be predetermined by feeding the reference trajectory (task trajectory) into the robot controller; this is usually the natural gait trajectory of healthy subjects. The patient is then encouraged to complete the task with robotic assistance. Once the reference trajectory is preset and fed into the robot controller, however, it is not reasonable to keep the task difficulty invariant throughout the rehabilitation procedure. In order to ensure the smoothness and compliance of the robotic motion, the robot's desired trajectory should be modified intuitively and continuously in response to the human force (Losey and O'Malley, 2018). In this regard, pHRI should alter not only the robot's current state but also its future behavior. In this section, a trajectory deformation algorithm is studied to explore the influence of pHRI on the training task; the modification is made on the original reference trajectory, generating the desired trajectory for the subsequent controller design.
The reference trajectory (predetermined before the training) is defined as q*_d, and the desired trajectory (altered during the training) is defined as q_d. When the human-robot interaction torque τ_h is exerted on the robotic joints at time t_i, the original desired trajectory q*_d starts to deform; the trajectory deformation ends at time t_f, and accordingly, the duration of the trajectory deformation is p = t_f − t_i. Moreover, the deformed trajectory between t_i and t_f can be evenly divided into an arbitrary number of waypoints, with a time interval δ between consecutive waypoints. Concisely, the deformation process can be expressed as q*_d(t) → q_d(t), where t ∈ [t_i, t_f], and a diagram of the trajectory deformation is presented in Figure 3. It can be seen that both the magnitude and the direction of the human-robot interaction torque influence the shape of the deformed trajectory.
The larger the force exerted on the robot, the greater the deviation between the original and deformed trajectories; conversely, a smaller human force results in a smaller trajectory change. Besides, a human force in the opposite direction leads to a change in the deformation direction. Therefore, both the amplitude and the direction of the trajectory deformation should be taken into consideration when designing the trajectory adaptation method.

FIGURE 2
Optimization procedure of pHRI estimated from biological signals.

FIGURE 3
Diagram of the trajectory deformation.
Apparently, deformation from the reference trajectory may follow different curves to shape q_d, and there are many possible trajectory deformations. Constrained over the time interval [t_i, t_f], Γ^s_d(t) is defined as the deformation curve function, in which s is the deformation factor; changing the value of s derives different trajectory shapes. When s = 0, the segment of the desired trajectory between times t_i and t_f is represented as γ_d, where γ_d(t) = q*_d(t) for t ∈ [t_i, t_f], and γ_d is not defined outside this time interval. All other values of s refer to deformations of γ_d. As shown in Figure 3, in the time interval [t_i, t_f], the original trajectory γ_d(t) can be deformed to Γ^s1_d(t) and Γ^s2_d(t). When the subject starts to exert force at time t_i, the robot's desired trajectory is changed from γ_d to the deformed trajectory γ̃_d obtained with s = 1. Once γ̃_d is determined, the robot's desired trajectory is updated as q_d(t) = γ̃_d(t). After time t_f, the robot follows its reference trajectory q*_d again. Comparing (7) with (8), the vital factor that causes the trajectory deformation can be formulated as a vector field function Φ(t), which linearizes the dependency of Γ^s_d(t) on the deformation factor s, i.e.,

Γ^s_d(t) = γ_d(t) + sΦ(t). (9)

The vector field function Φ(t) is distributed along γ_d; in particular, when s = 1, γ̃_d = γ_d + Φ(t). The determination of Φ(t) should ensure the continuity, smoothness, and compliance of the deformed trajectory. Specifically, the transitions from q*_d(t_i) to q_d(t_i) and from q_d(t_f) to q*_d(t_f) should be as continuous as possible. Also, a minimum-jerk model (Li et al., 2017) is utilized to generate a smooth trajectory profile to guarantee patients' comfort and security. The vector field function Φ(t) is highly correlated with the human-robot interaction torque τ_h, and a cost function is designed and minimized to optimize the deformed trajectory for high compliance. The detailed explanation of determining the
vector field function is presented in Appendix 1, and the resultant formulation is given in (10)-(12). The parameter I ∈ R^(N×N) in (12) is an identity matrix, where N denotes the number of waypoints. The determination of the matrices A and B is introduced in Appendix 1. The parameter H influences the shape of Φ and is formulated in (11). The parameter β ∈ R^N in (10) is the prediction vector of the interaction torque; i.e., the future interaction torque can be formulated as βτ_h(t_i), with τ_h(t_i) being the interaction torque applied at time t_i. The direction of the interaction torque is included in the prediction vector so that the proposed algorithm can handle both the magnitude and the direction of the trajectory deformation. Even so, when the direction of the human force changes, the deformed trajectory should be recalculated with an updated prediction vector for delicate modulation of the training task. Besides, in robot-assisted rehabilitation, the duration of pHRI is relatively long because the patient mostly tries to participate actively to guide the robot, which differs from the assumption in Losey and O'Malley (2018). The parameter μ in (10) denotes the assistance level, and it can be tuned to arbitrate between human and robot. When μ increases, the induced trajectory deformations arbitrate toward the human, meaning that smaller input forces cause larger deformations, and vice versa. Herein, the assistance level is formulated as μ = τ_h/τ̂_h, where τ̂_h is the expected human joint torque required to complete the task trajectory in the absence of robotic assistance, obtained as described in Section 2.
Combining (9)-(12), the relationship between the vector field function and the interaction torque is clarified, and the deformed trajectory is thus obtained as (13). After γ̃_d is derived from (13), q_d is updated to include γ̃_d, and the process iterates at the next trajectory deformation when pHRI occurs again.
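A simplified numerical sketch of the deformation idea is given below. The zero-padded finite-difference matrix encodes the minimum-jerk metric and clamps the deformation so it blends in at both endpoints; normalizing the deformation shape so its peak equals μτ_h is an assumption standing in for the exact scaling derived in Appendix 1:

```python
import numpy as np

def deform_trajectory(gamma, tau_h, mu=0.5, beta=None):
    """Deform a waypoint segment gamma (between t_i and t_f) in response to
    an interaction torque sample tau_h, in the spirit of (9)-(13)."""
    N = len(gamma)
    beta = np.ones(N) if beta is None else np.asarray(beta)
    # Min-jerk (third-difference) matrix; zero deformation is implicitly
    # assumed outside the segment, which clamps both endpoints.
    A = np.zeros((N + 3, N))
    stencil = np.array([1.0, -3.0, 3.0, -1.0])
    for k in range(N):
        A[k:k + 4, k] = stencil
    H = np.linalg.inv(A.T @ A)
    shape = H @ beta
    shape = shape / np.max(np.abs(shape))  # normalized deformation shape
    return gamma + mu * tau_h * shape      # peak deformation equals mu * tau_h
```

With τ_h = 0 the segment is returned unchanged, and a nonzero torque bows the interior of the segment while leaving the endpoints essentially untouched, matching the continuity requirement above.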

Assist-as-needed controller
Based on the abovementioned trajectory generation scheme, an actuation controller needs to be designed to track the desired trajectory. More importantly, in response to the varying motor capabilities of different patients, a subject-adaptive controller is required to realize AAN training. In this section, an AAN control strategy along with a feedback gain modification algorithm is proposed to complete the training task motion while providing the minimum required assistance to encourage patients' active engagement.
The robot dynamics in joint space, with pHRI taken into account, can be presented as

M(q)q̈ + C(q, q̇)q̇ + G(q) + f(q̇) + τ_dis = τ_act + τ_h, (14)
where q ∈ R^n (n denotes the number of robotic joints) is the joint position vector, and q̇ and q̈ denote the joint velocity and acceleration, respectively. The matrix M(q) ∈ R^(n×n) is the inertia matrix, C(q, q̇) ∈ R^(n×n) is the centripetal and Coriolis matrix, G(q) ∈ R^n is the gravity torque, f(q̇) ∈ R^n is the friction, τ_dis ∈ R^n is the external disturbance, τ_act ∈ R^n is the robotic joint torque generated by the actuators, and τ_h ∈ R^n is the human-robot interaction torque.
Although the robot dynamics have been modeled as (14), it is impossible to accurately formulate the disturbances that may degrade the compliance of the robotic motion and the safety of pHRI. The total disturbance τ_d includes the estimation error of the human torque (τ_h − τ̂_h), the external disturbance τ_dis, and the unmodeled dynamics. Thus, the dynamics (14) can be rewritten as

M̂(q)q̈ + Ĉ(q, q̇)q̇ + Ĝ(q) + f̂(q̇) + τ_d = τ_act + τ̂_h, (15)

where M̂(q), Ĉ(q, q̇), Ĝ(q), and f̂(q̇) are the estimates of M(q), C(q, q̇), G(q), and f(q̇), respectively, and τ_d ∈ R^n denotes the total disturbance. The position tracking error with respect to the desired trajectory is defined as q̃ = q − q_d, and the sliding variable is defined as

r = q̃̇ + Λq̃, (16)

where Λ is a positive constant. In order to help subjects complete the desired tasks while providing the minimum required assistance, the AAN controller for the robotic actuation can be presented as

τ_act = M̂(q)(q̈_d − Λq̃̇) + Ĉ(q, q̇)(q̇_d − Λq̃) + Ĝ(q) + f̂(q̇) − K_D r − τ̂_h, (17)

where K_D ∈ R^(n×n) is a positive-definite feedback gain. The selection of K_D is elaborated in the subsequent feedback gain modification algorithm.
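Assuming the estimated dynamics terms are available from an identified model, the control law and sliding variable can be sketched as follows; `model` is a hypothetical callable returning (M̂, Ĉ, Ĝ, f̂):

```python
import numpy as np

def aan_torque(q, dq, q_d, dq_d, ddq_d, model, K_D, Lam, tau_h_hat):
    """Compute the AAN actuation torque per (17) and the sliding variable per (16).

    model(q, dq) must return the estimated (M_hat, C_hat, G_hat, f_hat);
    Lam is the positive sliding-surface constant."""
    M, C, G, f = model(q, dq)
    e = q - q_d                    # position tracking error
    de = dq - dq_d                 # velocity tracking error
    r = de + Lam * e               # sliding variable (16)
    tau_act = (M @ (ddq_d - Lam * de) + C @ (dq_d - Lam * e)
               + G + f - K_D @ r - tau_h_hat)
    return tau_act, r
```

When the tracking error is zero, the law reduces to the model-based feedforward minus the estimated human torque, i.e., the robot supplies only what the subject is not expected to provide.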
Through the stability analysis in Appendix 1, we can conclude that the proposed control system yields a tracking error with uniformly ultimately bounded stability. The ultimate bound on the tracking error e is denoted as B_e, and its formulation is given in Appendix 1. If M̃ → 0, C̃ → 0, and τ_d → 0, the appended analysis shows that e → 0 and the system is globally asymptotically stable. Inequality (39) establishes that the trajectory tracking error r is uniformly bounded; more importantly, this bound can be explicitly calculated as follows. The Lyapunov function can be bounded as α_1(‖e‖) ≤ V ≤ α_2(‖e‖), where α_1(·) and α_2(·) are certain class functions defined below. The ultimate bound B_e on the tracking error e can then be defined as

B_e = α_1^(−1)(α_2(z_l)), (41)

where z_l is the limiting term that satisfies V̇ < 0 for all ‖e‖ ≥ z_l ≥ 0. Since the inertia matrix M is positive-definite and bounded, the following inequality can be derived:

(1/2)λ_min(M)‖r‖² ≤ V ≤ (1/2)λ_max(M)‖r‖², (42)

where λ_min(M) and λ_max(M) are the minimal and maximal eigenvalues of the inertia matrix M, respectively. The left and right sides of (42) correspond to α_1(‖e‖) and α_2(‖e‖), respectively. By adopting the right side of (39) as the limiting term z_l and substituting inequality (42) into (41), the bound on the tracking error can be calculated. Noticeably, the feedback gain K_D is included in the formulation of the bound B_e, which means that the bound on the allowable trajectory tracking error can be manipulated by directly varying K_D, and the amount of robotic assistance can consequently be adjusted. Although an adequately large K_D results in a minimal bound on the tracking error, a perfect tracking effect is not desirable for stimulating patients' potential motor capabilities (Pehlivan et al., 2015). An appropriate allowable tracking error can facilitate improved rehabilitation efficacy, especially when aiming to promote the patient's active participation. Increasing or decreasing the value of K_D is suitable in the following practical situations.
Situation 1: When a patient with severe motor disability has difficulty completing the training task or learning a motion, increasing the value of K_D leads to a reduced allowable tracking error, and larger robotic actuation is provided for assistance.
Situation 2: When a patient attempts to stimulate muscle strength to challenge themselves with more difficult tasks, decreasing the value of K_D leads to an increased allowable tracking error, and smaller robotic actuation is provided to leave more room for the patient's own effort.
Therefore, the selection of the feedback gain K_D plays an important role in addressing the trade-off between accurate trajectory tracking and sufficient participation encouragement. To solve this problem, a feedback gain modification algorithm is put forward to enable patients to complete and even challenge the task according to their residual motor capabilities and motion intention.
A parameter e* is introduced to define the maximum allowable trajectory tracking error. The average tracking error in a given task is recorded as e_i under the feedback gain K_D,i, which will be updated to K_D,i+1 in the next task based on the patient's performance. The performance metric is the human-robot interaction torque, which evaluates voluntary movement ability. The normalized muscle activation in the current task is u_i = |τ_h,i|/|τ̂_h,i|, where τ_h,i denotes the average interaction torque in the current task and τ̂_h,i denotes the human joint torque required to complete the task in the absence of robotic assistance. Similarly, the human's performance in the prior task can be expressed as u_i−1 = |τ_h,i−1|/|τ̂_h,i−1|. The comparison of the human's performance in the current task with that in the previous task is taken as the variance tendency of the subject's motor capability. Concretely, if u_i < u_i−1, the patient shows a downward tendency in muscle strength stimulation, and the future feedback gain K_D,i+1 should be increased to meet Situation 1; otherwise, if u_i > u_i−1, the patient shows an upward tendency in rehabilitation efficacy, and the future feedback gain K_D,i+1 should be decreased to meet Situation 2.
The feedback gain is updated at the end of each task trajectory according to the law

K_D,i+1 = (1 + ϖ_i) K_D,i, (21)

where ϖ_i is the change rate and satisfies −1 < ϖ_i < 1. Specifically, ϖ_i ∈ (−1, 0) decreases the future feedback gain with respect to the current one, whereas ϖ_i ∈ (0, 1) increases it. The formulation of the change rate is defined in (22), where ϖ_nom is the nominal change rate, predetermined as a constant to limit the maximal tracking error to less than e*. The sign of ϖ_i depends on the variance tendency of the subject's motor capability. For instance, if u_i is larger than u_i−1, then ϖ_i ∈ (−1, 0): the algorithm judges that the subject has the potential to exhibit better performance in the next task, and the feedback gain decreases for a larger error bound. Conversely, if u_i is smaller than u_i−1, then ϖ_i ∈ (0, 1): the algorithm judges that the subject failed to complete the current task with improved voluntary muscle strength, and the feedback gain increases for more assistance in the next task. The magnitude of ϖ_i is determined by both the maximum tracking error and the performance variance.
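The multiplicative update can be sketched as follows; the sign logic follows the text, while the specific magnitude rule (scaling ϖ_nom by the error ratio e_i/e*) is an assumption in place of the change-rate formulation (22), which is not reproduced here:

```python
def update_feedback_gain(K_D, u_i, u_prev, e_avg, e_star, w_nom=0.1):
    """Return K_{D,i+1} = (1 + w_i) * K_{D,i}.

    Sign: performance up (u_i > u_prev) -> w_i < 0 (less assistance);
    performance down -> w_i > 0 (more assistance). The magnitude here
    scales w_nom by the error ratio e_avg/e_star (a placeholder for (22))."""
    sign = -1.0 if u_i > u_prev else 1.0
    w_i = sign * w_nom * min(e_avg / e_star, 1.0)
    assert -1.0 < w_i < 1.0
    return (1.0 + w_i) * K_D
```

Improved normalized activation thus relaxes the gain (larger allowable error, less assistance), and degraded activation tightens it, matching Situations 1 and 2.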
Combining the AAN controller (17) with the feedback gain modification algorithm (21) and (22), the control framework provides a highly efficient and autonomous training strategy for robot-assisted rehabilitation.

Experiments
In order to validate the proposed control framework, the lower extremity rehabilitation robot described in Xu et al. (2019) and Xu et al. (2021) was utilized to conduct a series of experiments. The experiments were carried out on three healthy subjects. All subjects were informed of the detailed operation procedures and potential risks and signed consent forms before participation. The experiments were approved by the ethics committee of Hefei Institutes of Physical Science, Chinese Academy of Sciences (approval number: IRB-2019-0018). Two DOFs of the robot, hip flexion/extension and knee flexion/extension, were involved in the training. Before operation, the reference trajectories were prescribed by physiotherapists to ensure rhythmic and comfortable training motion, and they were then fed into the robot controller. During the training, the subjects were asked to track the reference trajectories with the assistance of the rehabilitation robot actuation. Once the interaction force exerted by a subject was detected by the sensors, the reference trajectory was deformed to generate a new optimal desired trajectory, and the robot was controlled to cooperate with the subject to complete the modified task motion. It should be noted that the subjects were not allowed to voluntarily move in the direction opposite to the task trajectory, both for accurate calculation of the deformed trajectory and to guarantee safety.
Three groups of experiments were performed for the three subjects, with three different reference trajectories configured. The time interval between consecutive waypoints was set to δ = 0.01 s, and the prediction vector of the interaction torque was set to β = 1. The vector field function was updated four times per walking cycle, for hip flexion, hip extension, knee flexion, and knee extension, where the computational efficiency was adequate to ensure instantaneous and accurate trajectory deformation. The amount and variance of pHRI differed across the three subjects due to their individual motor capabilities and motion intentions. The experimental results are shown in Figures 4-6. Subfigures A and B demonstrate the trajectory deformation and tracking of the hip and knee joints, respectively. The variance of the interaction torque at the hip and knee joints is illustrated in subfigure C, and the robotic actuation torque is presented in subfigure D. The experimental results indicate that the proposed trajectory generator can continuously produce a smooth and optimal desired trajectory once the subject exerts force on the robot. When the interaction torque disappears, the desired trajectory gradually converges back to the predetermined reference trajectory. In this regard, shared control between the robot's desired trajectory and the human's voluntary effort is realized. Additionally, based on the proposed AAN controller, the actual trajectory output by the robot actuation tracks the desired trajectory well.
In order to exhibit the control performance more intuitively, a quantitative evaluation with three metrics was conducted. For trajectory smoothness, the dimensionless squared jerk (DSJ) (Hogan and Sternad, 2009) was adopted; its definition is presented in (23), and a smaller DSJ value indicates a smoother movement trajectory. For the compliance assessment, the energy per unit distance (EPUD) (Lee et al., 2018) was selected, as defined in (24). With improved compliance, the subject can drive the robot with less interaction torque, so a smaller EPUD value indicates higher robot compliance. The root mean square error (RMSE) defined in (25) was utilized to reveal the position error between the desired trajectory and the actual trajectory; a smaller RMSE value indicates a better tracking effect. The three metrics are formulated as follows.
In (23), t_a and t_b are the start and end times of the trajectory, q_max is the maximum amplitude of the trajectory, and the integrand involves the third time derivative (jerk) of the trajectory. In (24), j = 1, 2, . . ., N is the sample number, τ_h(t_j) is the human-robot interaction torque at time t_j, and Δd(t_j) is the deviation between the reference trajectory and the desired trajectory at time t_j. In (25), q(t_j) and q_d(t_j) are the actual and desired trajectories at time t_j, respectively.
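The three metrics can be sketched as below; the discretizations (finite-difference jerk, rectangular integration) and the exact EPUD normalization are assumptions to be checked against the cited definitions:

```python
import numpy as np

def dsj(q, dt):
    """Dimensionless squared jerk: (t_b - t_a)^5 / q_max^2 times the
    integral of squared jerk over the trajectory."""
    q = np.asarray(q, dtype=float)
    jerk = np.gradient(np.gradient(np.gradient(q, dt), dt), dt)
    duration = dt * (len(q) - 1)
    return duration ** 5 / np.max(np.abs(q)) ** 2 * np.sum(jerk ** 2) * dt

def epud(tau_h, delta_d):
    """Energy per unit distance: interaction work divided by path deviation."""
    tau_h, delta_d = np.asarray(tau_h), np.asarray(delta_d)
    return np.sum(np.abs(tau_h * delta_d)) / np.sum(np.abs(delta_d))

def rmse(q, q_d):
    """Root mean square error between actual and desired trajectories."""
    return np.sqrt(np.mean((np.asarray(q) - np.asarray(q_d)) ** 2))
```

All three are scalar scores per trial, so averaging them over repeated trials reproduces the tabulated comparison in Table 1.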
Additionally, to better demonstrate the advantage of the proposed control framework, comparison experiments were performed with an admittance controller without the trajectory deformation and feedback gain modification algorithms (Li et al., 2017). The admittance control was implemented with the same robot and subjects, and the admittance parameters were regulated in response to the subjects' biological actions. The trajectory deformation and tracking performance of both control systems were evaluated with the abovementioned three metrics. In addition, to evaluate rehabilitation efficacy, the muscle activation improvement was normalized and recorded, and the Fugl-Meyer assessment (FMA) was deployed for clinical evaluation; a higher normalized EMG value and a higher FMA score indicate rehabilitation improvement. Each trial was conducted three times for accuracy, and the mean values of these metrics are recorded in Table 1. The robot motion compliance and the movement smoothness of the hip and knee joint trajectories generated by the trajectory deformation algorithm are much better than those obtained with the admittance control. Furthermore, the tracking performance under the feedback gain algorithm proved more satisfactory than that of the admittance control. The enhancement of muscle strength and clinical assessment scores is evident compared with the performance without the proposed control framework. Overall, the comparison results prove that the control framework can effectively help the patient learn to move along the proper trajectory, and the training becomes more challenging and brings better rehabilitation efficacy.
Next, the feedback gain modification algorithm for the AAN controller was experimentally examined during the optimized training task. The subjects were required to voluntarily exert forces on the robot, and the feedback gain K_D was adapted according to the subjects' performance, producing subject-adaptive robotic assistance. At the end of each training task, questionnaires were filled in to identify whether the current task was easier or more difficult than the previous one. The questionnaire responses only served to assess the subjects' subjective perception without affecting the robot controller. After that, the subsequent training task started immediately. The tracking performance of the AAN controller, the variation of the feedback gain K_D, and the human-robot interaction torque were measured in real time and are depicted in Figure 7. It can be seen from the figure that the feedback gain and tracking error are functions of the interaction torque. The experimental results demonstrate that the feedback gain K_D responds correctly to the subjects' motor capabilities. When subjects complete the task with more active involvement, K_D decreases, the allowable trajectory tracking error increases, and the magnitude of robotic assistance decreases for further encouragement, and vice versa. The variation tendency of K_D is consistent with the questionnaire results. Furthermore, no matter how the tracking error varies, the user-selected maximum allowable trajectory tracking error r* is always larger than or equal to the actual tracking error.
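The adaptation behavior described above can be sketched as a simple gain-update rule. The functional form, thresholds, and numerical values below are hypothetical illustrations, not the paper's exact modification law:

```python
import numpy as np

def update_assistance(K_D, involvement, r_star,
                      K_min=5.0, K_max=50.0, rate=2.0, thresh=0.5):
    """Hypothetical feedback-gain modification step.

    involvement: normalized active-participation score in [0, 1], e.g.
        estimated from the subject's EMG. Above `thresh` the subject is
        performing well, so K_D decreases and the allowed tracking error
        grows (less assistance); below it, K_D increases (more assistance).
    r_star: user-selected maximum allowable tracking error; the returned
        error bound never exceeds it.
    """
    K_new = np.clip(K_D - rate * (involvement - thresh), K_min, K_max)
    # Smaller gain -> larger allowed error; K_min/K_new <= 1, so the
    # bound stays at or below the user-selected maximum r*.
    r_bound = r_star * (K_min / K_new)
    return K_new, r_bound
```

Clipping K_D to [K_min, K_max] keeps the assistance bounded, and tying the error bound to the gain reproduces the observed coupling between K_D, the allowable tracking error, and the assistance magnitude.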

Conclusion
In this paper, a control framework is proposed for the simultaneous adaptation of training tasks and robotic assistance in robot-assisted rehabilitation. Specifically, a trajectory deformation algorithm is developed to enable pHRI to regulate the task difficulty in real time, generating a smooth and compliant desired trajectory. Furthermore, an AAN controller, along with a feedback gain modification algorithm, is designed to motivate patients' active participation, where the robotic assistance is adjusted by evaluating the patients' performance variation and determining the trajectory tracking error bound. Appropriate training difficulty and assistance level are two important issues in robot-assisted rehabilitation. In this study, appropriate training difficulty is expressed as generating a proper trajectory, which is realized with the proposed trajectory deformation algorithm; an appropriate assistance level is expressed as increasing the user's EMG level, which is realized with the proposed AAN controller and the feedback gain modification algorithm. The balance between these two issues is essential for better rehabilitation efficacy, and the proposed control framework addresses this balance well. A lower extremity rehabilitation robot with MR actuators is then employed to validate the effectiveness of the proposed control framework. Experimental results demonstrate that the training task difficulty and robotic assistance level can be regulated appropriately according to subjects' changing motor capabilities.
In future work, novel methods will be explored to estimate human motor capabilities and improve pHRI control strategies. More diverse training tasks will be introduced to meet the rehabilitation requirements of different degrees and types of impairment. Machine learning may be adopted to achieve better time efficiency and greater adaptability of robotic assistance modification. Furthermore, more clinical trials will be carried out to extend the proposed control framework to clinical application.

FIGURE 1
FIGURE 1 Control framework of simultaneous adaptation of training tasks and robotic assistance.

FIGURE 4
FIGURE 4 Experimental results of subject 1. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

FIGURE 5
FIGURE 5 Experimental results of subject 2. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

FIGURE 6
FIGURE 6 Experimental results of subject 3. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

TABLE 1
Comparison results.

Therefore, it can be concluded that the feedback gain modification algorithm can effectively adjust the robotic assistance according to the subjects' changing performance, which is expected to encourage impaired patients' active participation and facilitate rehabilitation efficacy.