Standing, Walking, and Sitting Support Robot Based on User State Estimation Using a Small Number of Sensors

With the aging of the population and the consequent severe shortage of caregivers, the demand for care robots to assist the elderly is increasing. However, care robots have yet to be widely adopted owing to cost constraints and user anxiety arising from several factors. For instance, care robots are required to have higher functionality than general care devices: they must provide both substantial assistive force and support appropriate to the user's state. However, this requires more sensors to obtain detailed information for user-state estimation and more actuators for physical support, which increases both cost and the risk of failure. In a system that carries many sensors and operates on detailed data, user privacy also becomes a concern: the risk of personal-information leakage and the feeling of being monitored increase user discomfort. To support standing up and to prevent falls during walking, care robots must apply force to the user according to the user's state. The position of the center of gravity (CoG) has been used for such state estimation; however, many sensors are required to determine the CoG position accurately. To reduce the number of sensors required for user-state estimation, we previously proposed a method for calculating CoG candidates and validated it via experiments. Previous studies focused solely on normal standing-up motion. However, in daily activities, standing up, walking, and sitting down form a sequence of motions. In addition, care robot users cannot always move normally; hence, anomaly detection is beneficial in care robots. Therefore, it is important to estimate the user state considering not only the standing-up motion but also walking and sitting down, as well as any anomaly that may occur during these motions. In this study, we develop an elderly support system that can assist in standing, walking, and sitting based on user state estimation.
The CoG candidate calculation method is improved for walking and stand-to-sit movements, and an anomaly detection method using CoG candidates is also proposed. The care robot is designed to be user-driven and provide support for persons with insufficient strength based on state estimation. The experiments verify that the developed system can constantly monitor the user’s state and support a series of movements, such as standing up, walking, and sitting down, with a single robot.


I. INTRODUCTION
The aging of the population causes a severe shortage of caregivers. Care robots are garnering attention as a potential solution to support the increasing number of people who require care and to reduce the burden on caregivers. Various types of care robots have been studied to support body movements of the elderly. Several systems exist to support standing by lifting the user's body with a lift [1] or by using a humanoid robot [2] or wearable device [3]. Some robot systems support walking with a cane [4], [5], walker [6], [7], [8], or wearable device [9], [10]. Although these robots can provide specialized support for each movement, installing multiple robots is expensive and space-consuming. In addition, standing up, walking, and sitting down form a series of actions, and it is inconvenient to use a different robot for each action. Therefore, several systems that can support these actions with a single robot have been developed [11], [12], [13]. However, robots that can support multiple actions tend to be large in scale. In addition, excessive support from a large-scale system decreases the user's sense of agency. Therefore, a compact robot is required to support a series of movements, such as standing up, walking, and sitting down, without impairing the user's sense of agency. Compared with general welfare devices, care robots are characterized by their ability to support the user with substantial force, recognize the environment or user through sensors, and conduct appropriate motions accordingly [6], [14]. Several studies exist on care robots that can estimate user states by utilizing force sensors, laser range finders (LRFs), or cameras [15], [16]. These robots can provide better support and reduce the burden on caregivers. As the population ages and the need for support increases, it is becoming more important to have robots that can aid in a series of motions and provide appropriate support according to the user state.
Care robots have not been widely used, despite the demand for them and their benefits. A primary reason for this is cost. According to a special public opinion poll conducted by the Cabinet Office, Government of Japan in 2013, 68.6 % of the respondents indicated that "low price" was an important factor in the adoption of care robots [17]. A system that can support a series of motions using a single robot is superior in terms of cost. However, it is impractical for general household use because of the cost of the large number of sensors used to estimate the user state. Although the price of a single sensor is expected to decrease with time, using more sensors still results in a more costly system. Therefore, to significantly reduce the cost, a method that can estimate the user state even with a small number of sensors is required. In addition, the failure rate of the system increases with the number of sensors. Considering maintenance and repair, few failures are desirable, especially for care robots, because engineers with expertise are not always available. Furthermore, obtaining a large amount of information from sensors raises privacy concerns: the feeling of being monitored may increase user discomfort, and there is a risk of unexpected leakage of the user's personal information [18]. Therefore, we focused on the position of the center of gravity (CoG) and proposed CoG candidate calculation as a method for recognizing the robot user using a small number of sensors [19], [20]. The number of sensors can be reduced by focusing on the position of the CoG rather than by comprehensively measuring the entire body. The CoG position can be adopted as an indicator of a person's posture and is also used to evaluate the possibility of falling.
Accordingly, the CoG position has been used for gait analysis by utilizing motion capture systems and force plates, and inertial measurement units (IMUs) have also been adopted [21]. In care robots, the CoG position is measured using LRFs, position-sensitive detectors (PSDs), and wearable sensors such as IMUs [22], [23], [24], [25]. However, the CoG position cannot be obtained accurately using a small number of sensors. Hence, we conducted research based on the idea that the state of the robot user can be estimated from CoG candidates, provided that the candidates can be calculated within a sufficiently narrow range, even if the exact CoG position is unknown [19]. A sitting/standing state estimation method and a leaning estimation method using the CoG candidates were proposed in [20] and [26]. However, these methods focus solely on sit-to-stand movements. In addition, they assume normal motion and do not consider the case where the user enters an abnormal state during the movement.
In this study, we also focused on walking and stand-to-sit movements, as well as anomaly detection, and proposed a care robot system that can support a series of motions, such as sit-to-stand, walking, and stand-to-sit, with a single compact robot. A novel CoG candidate calculation method for walking and stand-to-sit motions, as well as state estimation and anomaly detection, was proposed. The proposed methods were implemented in the developed care robot and validated via experiments.

II. NOVEL COG CANDIDATE CALCULATION METHOD USING KNEE POSITIONS
Focusing on care robots and usable sensors, a sensor arrangement and a CoG candidate calculation method were proposed and validated in previous studies [19], [20]. This section explains the care robot and the CoG candidate calculation method previously proposed in [19]. Subsequently, a novel CoG candidate calculation method is proposed in Section II-C.

A. DEVELOPED CARE ROBOT
The care robot adopted in this study is illustrated in Fig. 1. The specifications of the robot are presented in Table 1. The adopted robot is a walker-type robot with an armrest, two wheels with brakes, and four casters. The user sits or stands behind the robot, positions both arms on the armrest, and grabs the grippers.
Excessive support from a large-scale system increases muscle weakness and dissatisfaction owing to a decreased sense of agency. Therefore, a system that allows users to take the initiative in exercising while providing support where they lack strength is appropriate for elderly care. When the user leans forward against the care robot, the armrest moves up and down to support standing up and sitting down. The robot is required to operate according to the state transition. In the following sections, the CoG candidate calculation method is elucidated as a basis for state estimation.

B. FUNDAMENTAL CONCEPT OF COG CANDIDATE
The fundamental concepts of the sensor arrangement and CoG candidate calculation were proposed in [19]. First, we consider a sagittal-plane 6-link human model, as illustrated in Fig. 2. The link lengths are assumed to be known because they can be measured before using the care robot. The CoG position can be calculated by adopting the link mass ratios, provided that the positions of all the links can be determined. Conversely, the CoG position cannot be determined uniquely if there are fewer sensors than required. The sensors that are considered adoptable in care robots are listed, and measurement sets are developed as combinations of these sensors. These measurement sets are classified by the number of unknown parameters of the link model, as presented in Fig. 3. The black points and white squares represent the positions of the points and angles of the joints, respectively. These data can be acquired using simple sensors, such as distance sensors and angle meters. The candidates of the human CoG can be calculated by considering the ranges of the unknown parameter values. The calculation procedure for measurement set 1a is presented in Fig. 4. Only the positions of the forearm and foot links are identified in this pattern. The positions of the shoulder joint candidates can be calculated by considering the rotation range of the elbow, as illustrated in Fig. 4 (a). Focusing on one shoulder candidate, the corresponding hip joint candidate position can be calculated by using the position of one body link point and the length of the body link, as shown in Fig. 4 (b). The corresponding knee candidate can also be calculated, as presented in Fig. 4 (c). Subsequently, the positions of all the joints are either determined or calculated as candidates; hence, the corresponding CoG candidate can be obtained, as illustrated in Fig. 4 (d). By repeating this procedure, all CoG candidates can be calculated, as shown in Fig. 4 (e).
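As a concrete illustration of this enumeration, the following sketch sweeps assumed elbow and trunk angle ranges to generate shoulder and hip candidates, closes the leg chain geometrically, and averages segment midpoints into CoG candidates. All link lengths, mass ratios, joint ranges, and the trunk-angle sweep (a simplification of using a measured body-link point) are illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np

# Illustrative parameters for a sagittal 6-link model. Link lengths (m) and
# segment mass ratios are placeholder values, not the paper's measurements.
L_UPPER_ARM, L_BODY, L_THIGH, L_SHANK = 0.30, 0.50, 0.40, 0.40
MASS = {"upper_arm": 0.06, "body": 0.60, "thigh": 0.22, "shank": 0.12}

def circle_intersections(c0, r0, c1, r1):
    """Intersection points of two circles; empty if the chain cannot close."""
    d = np.linalg.norm(c1 - c0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = np.sqrt(max(r0**2 - a**2, 0.0))
    mid = c0 + a * (c1 - c0) / d
    perp = np.array([-(c1 - c0)[1], (c1 - c0)[0]]) / d
    return [mid + h * perp, mid - h * perp]

def cog_candidates(elbow, ankle, n=15):
    """Enumerate CoG candidates when only the elbow (forearm) and ankle
    (foot) points are measured, sweeping the unknown elbow and trunk angles
    over assumed joint ranges and discarding infeasible chains."""
    cands = []
    for elbow_ang in np.linspace(np.deg2rad(20), np.deg2rad(150), n):
        shoulder = elbow + L_UPPER_ARM * np.array([np.cos(elbow_ang),
                                                   np.sin(elbow_ang)])
        for trunk_ang in np.linspace(np.deg2rad(-10), np.deg2rad(40), n):
            hip = shoulder - L_BODY * np.array([np.sin(trunk_ang),
                                                np.cos(trunk_ang)])
            # Knee candidates: thigh circle around the hip must meet the
            # shank circle around the ankle.
            for knee in circle_intersections(hip, L_THIGH, ankle, L_SHANK):
                if knee[1] < 0:          # exclude knees below the floor
                    continue
                segs = {"upper_arm": (elbow, shoulder),
                        "body": (shoulder, hip),
                        "thigh": (hip, knee),
                        "shank": (knee, ankle)}
                # Mass-weighted average of segment midpoints.
                cog = sum(MASS[k] * (p + q) / 2 for k, (p, q) in segs.items())
                cands.append(cog / sum(MASS.values()))
    return cands
```

Each surviving (elbow angle, trunk angle, knee branch) combination yields one CoG candidate, mirroring steps (a)-(e) of the procedure.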
We also proposed another method for calculating the CoG candidates with fewer sensors for standing motions in [20]. In this method, the ankle position is assumed to be immobile during any movements. By assuming the ankle position as a candidate beforehand, information on the foot position is not required, and the number of sensors required is reduced. Although the CoG candidates can be calculated with a small number of sensors by adopting this method, it is difficult to apply this method to walking and anomaly detection. The accuracy of the CoG candidate, which is calculated with this assumption, decreases because the legs move during these motions. The CoG candidates can be more accurate and can be adopted for user state estimation in walking and anomaly detection by measuring leg data. Hence, the care robot can support these motions by using a novel CoG candidate calculation method that utilizes leg data. In the next section, a novel sensor arrangement and CoG candidate calculation method suitable for state estimation during walking and anomaly detection are proposed. The proposed method requires a small number of sensors, which is equal to that of [19], and is fewer than the sensors required to determine the accurate CoG position.

C. NOVEL COG CANDIDATE CALCULATION USING KNEE POSITIONS
This section presents a novel calculation method for CoG candidates using knee positions. Measurement set 1a exhibited optimal results in [19]; hence, we adopted it as a reference to determine the novel sensor arrangement. Information on the lower limbs is important for state estimation during walking. From the analysis of robot user motions in [20], it was determined that the ankle position does not move significantly during standing and sitting movements. Therefore, ankle information is not optimal for a welfare system that supports standing, walking, and sitting with a single robot. The knee position is more important during standing and sitting [20], and is considered to be equally important during walking. Therefore, in terms of the accuracy of the CoG candidates, measurement sets that include knee position data are considered suitable for state estimation during walking and anomaly detection. The selected measurement set is presented in Fig. 5. The procedure for CoG candidate calculation is presented in Fig. 6. The forearm position is identified in the same manner as in measurement set 1a. The shoulder candidates are calculated by adopting the rotation range of the elbow joint, as shown in Fig. 6 (a). The corresponding hip candidate can be calculated by focusing on one shoulder candidate, as presented in Fig. 6 (b). The candidates of the ankle joint can be calculated by adopting the rotation range of the knee, as illustrated in Fig. 6 (c). Note that joint position candidates for which other joint angles or link lengths are out of range during the calculation process are excluded. The CoG candidate can be determined for each combination of candidate joints, as presented in Fig. 6 (d). All CoG candidates can be obtained by performing the same calculation for all combinations, as illustrated in Fig. 6 (e).
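Step (c), generating ankle candidates from the knee rotation range and pruning infeasible configurations, might be sketched as follows. The flexion range, shank length, and floor-level feasibility check are assumptions for illustration, not the paper's actual limits.

```python
import numpy as np

def ankle_candidates(hip, knee, shank_len=0.45,
                     knee_flexion=(np.deg2rad(0), np.deg2rad(130)), n=20):
    """Sweep the knee flexion angle over an assumed range to generate ankle
    candidates from a measured knee position, pruning configurations that
    are geometrically infeasible (here, below floor level)."""
    thigh = knee - hip
    thigh_ang = np.arctan2(thigh[1], thigh[0])
    out = []
    for flex in np.linspace(*knee_flexion, n):
        ang = thigh_ang + flex       # shank direction relative to the thigh
        cand = knee + shank_len * np.array([np.cos(ang), np.sin(ang)])
        if cand[1] >= 0.0:           # exclude ankle candidates below the floor
            out.append(cand)
    return out
```

The same pruning idea extends to the other joint-range and link-length checks mentioned above.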

III. STATE ESTIMATION AND VALIDATION EXPERIMENT USING MEASURED DATA
This section explains each state estimation method and its validation experiment using measured data. The care robot described in Section II operates according to the user state. If the user is sitting, the robot is next used to stand up; hence, the robot discriminates between the normal sitting posture and the forward-leaning posture, which is the preliminary stage of standing. Using the leaning estimation, the robot can determine when to assist the user in standing up. During standing-up and sitting-down motions, the robot estimates whether the user is in an abnormal state. When the user is in a standing position, the robot first estimates whether the user is about to fall. If the user is in a normal standing state, the robot discriminates whether the user is leaning forward, as in the sitting case. The state transition of the robot user is illustrated in Fig. 7. The robot function does not differ between normal standing and walking; thus, the robot does not discriminate between them.
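The transitions described above can be summarized as a small state machine. The event names and the recovery transition below are simplifications for illustration; the actual robot derives transitions from the SVM-based estimators rather than symbolic events.

```python
from enum import Enum, auto

class UserState(Enum):
    SITTING = auto()
    SITTING_LEANING = auto()
    STANDING_UP = auto()
    STANDING = auto()          # walking is treated the same as standing
    STANDING_LEANING = auto()
    SITTING_DOWN = auto()
    ANOMALY = auto()

# Simplified transition table inferred from the described behavior.
TRANSITIONS = {
    (UserState.SITTING, "lean"): UserState.SITTING_LEANING,
    (UserState.SITTING_LEANING, "armrest_up"): UserState.STANDING_UP,
    (UserState.STANDING_UP, "done"): UserState.STANDING,
    (UserState.STANDING_UP, "anomaly"): UserState.ANOMALY,
    (UserState.STANDING, "lean"): UserState.STANDING_LEANING,
    (UserState.STANDING, "anomaly"): UserState.ANOMALY,
    (UserState.STANDING_LEANING, "armrest_down"): UserState.SITTING_DOWN,
    (UserState.SITTING_DOWN, "done"): UserState.SITTING,
    (UserState.SITTING_DOWN, "anomaly"): UserState.ANOMALY,
    # Assumed recovery path: brakes release when the user returns to normal.
    (UserState.ANOMALY, "recovered"): UserState.STANDING,
}

def step(state, event):
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```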
The state estimations adopt CoG candidates as the input for a support vector machine (SVM). Four participants (Participants A-D) conduct predefined motions 11 times each; data from 10 trials are used to train the model, and the remaining trial is used for validation. The participants are young, 166-172 cm tall, weigh 50-63 kg, and none of them have physical disabilities. Even though different users have different body parameters and limb movement conditions, the proposed method is not strongly affected by the user's physique, for two reasons. First, the tendency of the CoG to move during each motion is common across users. Second, an SVM model is created for each user; therefore, state estimation is performed according to the characteristics of each user. Although the actual users of care robots are elderly people, it is assumed that they can be simulated by healthy young people for the above reasons. Informed consent was obtained from all participants prior to the experiments. The experiments were reviewed and approved by the Research Ethics Review Board of Toyohashi University of Technology. A detailed explanation of each state estimation is presented in the following sections.
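The per-user training scheme (10 training trials, one held-out trial) can be sketched as below. The feature definitions (centroid and extent of the candidate set) and the synthetic data are stand-ins for the paper's measured CoG-candidate features defined in [20], [26], not the real ones.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(cands):
    """Hypothetical geometric features of a CoG-candidate set."""
    c = np.asarray(cands)
    return np.array([c[:, 0].mean(), c[:, 1].mean(),
                     np.ptp(c[:, 0]), np.ptp(c[:, 1])])

def fake_trial(label, n=50):
    """Synthetic frames standing in for one participant's measured data."""
    base = np.array([0.30, 0.90]) if label == 0 else np.array([0.45, 0.85])
    X = [features(base + rng.normal(0, 0.02, (20, 2))) for _ in range(n)]
    return X, [label] * n

# 10 training trials per state (0: normal, 1: leaning); the 11th is held out.
X, y = [], []
for _ in range(10):
    for label in (0, 1):
        f, l = fake_trial(label)
        X += f
        y += l

clf = SVC(kernel="rbf").fit(X, y)
Xv, yv = fake_trial(1)               # held-out 11th trial
accuracy = clf.score(Xv, yv)
```

Training a separate `clf` per participant is what allows the method to adapt to each user's physique.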

A. LEANING ESTIMATION
The robot supports standing up and sitting down by moving its armrest when the user is leaning. Hence, leaning estimation is important for starting the armrest motion. We adopt the estimation method that we previously proposed in [20], [26]. A few geometric features of the CoG candidates are used as the SVM features to estimate the user state. Four participants perform three motions (sit-to-stand, walking, and stand-to-sit) 11 times each using the care robot. In this study, walking is treated as the normal standing state because the robot function is the same. The robot calculates the CoG candidates from the sensor data and then the features for the SVM input, which are adopted for state estimation. Learning models are created using 10 trials of each action per person, and state estimation is performed on the remaining trial. From the measurement data, the data for the normal sitting and forward-leaning sitting states and the data for the normal standing and forward-leaning standing states are used to estimate leaning during sitting and standing, respectively.
Examples of the state estimation results are presented in Fig. 8. Green, yellow, blue, and light blue areas are the phases when the user is in the normal sitting, sitting and leaning, normal standing, and standing and leaning states, respectively. Purple points represent the estimated state; points located at the bottom and top indicate that the estimated state is sitting and leaning, respectively. The estimation is almost accurate, and incorrect estimations occur only in the vicinity of the boundaries between states, which indicates that the timing of the state transition is determined slightly earlier or later. The time errors are −0.4 s and −0.3 s for sitting and standing, respectively; negative and positive values represent early and late transition-time errors, respectively. Because these motions are continuous, it is difficult to determine the exact timing of the state transition. Hence, the results are considered practical, and the errors are sufficiently small for care robot operation. Fig. 9 presents the leaning estimation results of participant D. The participant's leaning is estimated slightly earlier than the actual leaning. In addition, the participant is estimated to return to a normal sitting position midway. The video confirms that the participant's posture is ambiguous at that time: the boundary between sitting and leaning is not clear while the body is moving into a leaning posture. Because human motion is continuous, this can occur in other state transitions as well. Therefore, the estimation results may oscillate between two states for a short time. This challenge can be addressed by committing to a state transition only when a different state continues for a while.
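The signed timing-error convention can be made concrete as follows; the 10 Hz frame rate and the state labels are assumptions for illustration.

```python
def transition_time_error(truth, est, target, dt=0.1):
    """Signed timing error of the first transition into `target`: negative
    means the transition was estimated earlier than the ground truth.
    Frame period dt is an assumed 10 Hz."""
    t_true = next(i for i, s in enumerate(truth) if s == target) * dt
    t_est = next(i for i, s in enumerate(est) if s == target) * dt
    return t_est - t_true

# Ground truth transitions to "lean" at frame 7; the estimator at frame 4.
truth = ["sit"] * 7 + ["lean"] * 5
est = ["sit"] * 4 + ["lean"] * 8
err = transition_time_error(truth, est, "lean")   # ≈ -0.3 s (estimated early)
```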
The results of the other participants are similar to those of participant A. The average time errors are −0.775 s and −1.1 s in the sitting and standing positions, respectively. A slight state estimation error exists; however, it is sufficiently small that the robot can be operated without problems by adopting verbal guidance during operation. The effectiveness of verbal guidance is elucidated in [26]. These results indicate that state estimation can be performed even with the novel CoG candidate calculation method.

B. ANOMALY DETECTION WHILE STANDING UP AND SITTING DOWN
The preceding sections describe state estimation, based on the previously proposed method, for the case where the user moves normally. This section explains the anomaly detection method for standing up and sitting down using the proposed state estimation. If the user is unable to stand up or sit down properly, it is considered an abnormal condition in standing or sitting. When a hand or arm leaves the armrest, a single sensor can detect the anomaly by itself. Therefore, we assume that an anomaly during the standing-up or sitting-down state occurs when the user's arms remain on the armrest and only the armrest rises or descends, respectively, as illustrated in Fig. 10. After leaning forward, the participants keep their arms on the armrest while it rises and descends without standing up or sitting down, 11 times each. The measurements indicate that, at the beginning of armrest movement, the knees have not yet begun to flex or extend even when the person is standing up or sitting down normally; this stage is indistinguishable from the case where only the armrest moves. Therefore, to create the learning model, only the data obtained after the armrest has moved to some extent are adopted as abnormal-state data. The normal-state data are taken from normal standing-up and sitting-down movements while the armrest is in motion. The model is trained on 10 trials per participant in the same manner as described above, and one trial each of the normal and abnormal standing and sitting motions per participant is adopted as validation data. Examples of anomaly detection results in the standing and sitting conditions are presented in Fig. 11 and Fig. 12, respectively. The orange, red, purple, and pink areas represent the phases when the user is rising normally, only the armrest is rising, the user is descending normally, and only the armrest is descending, respectively. No incorrect estimations occur except for the state-transition time error.
The time errors are +0.1 s and +0.3 s in the rising and descending cases, respectively. Because the anomaly can be detected at an early stage, the care robot is expected to handle it before the user is in an uncomfortable posture.
The results obtained for the other participants are similar to those for participant A. The average time errors are +0.325 s and +0.2 s in the rising and descending cases, respectively. These results indicate that CoG candidates can be used for anomaly detection during standing and sitting.
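The training-data selection described above, dropping abnormal-class frames until the armrest has moved "to some extent", might look like the following sketch; the travel threshold is an assumption.

```python
import numpy as np

def abnormal_training_frames(armrest_height, frames, min_travel=0.05):
    """Drop frames recorded before the armrest has traveled `min_travel` m
    from its start, because at that stage the knees look the same in normal
    and armrest-only trials. The threshold value is an assumption."""
    armrest_height = np.asarray(armrest_height)
    frames = np.asarray(frames)
    travel = np.abs(armrest_height - armrest_height[0])
    return frames[travel >= min_travel]
```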

C. ANOMALY DETECTION DURING WALKING
This section explains anomaly detection when the user is walking. As in the standing-up and sitting-down cases, when an arm leaves the armrest, a single sensor can detect the anomaly by itself; hence, we consider anomalies that occur while both arms remain on the armrest.
When the robot user is unable to move their legs properly while walking, the care robot and the user's body may separate, and the user may fall, as illustrated in Fig. 13. Therefore, we treat the case in which the user's legs cannot keep up with the care robot in front of them as an abnormal condition, because it is a precursor to falling. Simulated situations in which the participant stops walking midway and the care robot moves ahead are measured 11 times. Similar to Section III-B, the model is trained on 10 trials each of the anomalous and normal standing data. Participant A's validation results, using the remaining normal and abnormal trials, are presented in Fig. 14. The blue and gray areas represent the phases when the user is standing and about to fall, respectively. The time error for Participant A is −0.2 s. The estimation results do not differ significantly from those obtained by visual inspection. The results for the other participants are similar, and the average time error is −0.5 s. These results indicate that CoG candidates can be used for anomaly detection during walking.

IV. IMPLEMENTATION AND VALIDATION EXPERIMENT USING CARE ROBOT
The aforementioned estimation models are implemented in the care robot. The robot has three distance sensors and four touch sensors. Based on the state estimation results, the robot switches the assistive function appropriately, and it terminates the actuators' operation and applies the brakes when an abnormal state occurs. By terminating the robot's assistive actions at an early stage of an anomaly, the user is expected to be able to return to a normal state. The user state transitions and corresponding assistive functions are presented in Fig. 15. When the user is sitting, they grab the grippers, place their forearms on the armrest, and lean their upper body toward the armrest to initiate the standing-up movement. The care robot raises the armrest after a countdown when the user is leaning, and the user's body is then lifted. The countdown is designed according to [26], [27]. While the armrest is rising, the robot determines whether the user is standing up normally or only the armrest is moving up (anomaly detection during standing up). When an anomaly is detected, the armrest stops moving, and the wheels brake. When the user is standing, the robot detects an anomaly when the user's legs are separated from the robot. If an anomaly is detected, the wheels of the robot brake to prevent the user from falling. The brakes are released when the user returns to a normal state. In addition, leaning detection is performed if the user is standing normally. However, because the posture immediately after standing up is close to the leaning state, and the user is unlikely to sit down again immediately after standing up, the leaning estimation begins 2.0 s after standing up. The same applies to the sitting state. When the user is standing and leaning forward, the armrest descends after the countdown for sitting down.
While the armrest is descending, the robot determines whether the user is sitting down normally or only the armrest is descending (anomaly detection during sitting down); the armrest stops descending when an anomaly is detected. To address disturbances in the sensor values, the previous value is used when a sudden change in the sensor value occurs. In addition, considering the possibility of incorrect estimation, the robot recognizes a state transition and activates the actuator only when the same state continues for 0.3 s.
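These two safeguards, reusing the previous value on a sudden sensor change and committing to a state transition only after 0.3 s of persistence, can be sketched as below. The 10 Hz update rate (so 3 frames ≈ 0.3 s) and the spike threshold are assumed values.

```python
class StateDebouncer:
    """Spike filter on raw sensor values plus a persistence requirement
    before a state transition is accepted."""

    def __init__(self, spike_threshold=0.2, hold_frames=3):
        self.prev_value = None
        self.spike_threshold = spike_threshold  # assumed plausibility limit
        self.hold_frames = hold_frames          # 0.3 s at an assumed 10 Hz
        self.current = None
        self.pending = None
        self.count = 0

    def filter_sensor(self, value):
        """Reuse the previous value when the reading jumps implausibly fast."""
        if (self.prev_value is not None
                and abs(value - self.prev_value) > self.spike_threshold):
            return self.prev_value
        self.prev_value = value
        return value

    def update_state(self, estimate):
        """Commit to a new state only after it persists for hold_frames."""
        if estimate == self.current:
            self.pending, self.count = None, 0
        elif estimate == self.pending:
            self.count += 1
            if self.count >= self.hold_frames:
                self.current, self.pending, self.count = estimate, None, 0
        else:
            self.pending, self.count = estimate, 1
        return self.current
```

A single-frame flicker in the estimator output therefore never reaches the actuators.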
We implement these functions in the care robot and conduct experiments to validate their effectiveness. We assume a series of actions: standing up from a chair, walking forward, and sitting on another chair. The overview of the experiment is presented in Fig. 16. Based on this behavior, the following four conditions are tested:
• Perform all actions normally and continuously.
• Cannot keep up with the care robot and about to fall over in the middle of walking.
• Cannot stand up normally, and only the armrest rises.
• Cannot sit down normally, and only the armrest descends.
Scenes from the experiments are presented in Fig. 17-Fig. 20. Participant A conducts a total of 40 experiments, 10 in each condition. Examples of the experimental results for each condition are presented in Fig. 21-Fig. 24. Area colors indicate visually determined user states, as in Section III. The purple points in Fig. 21-Fig. 24 (a) represent the state estimated by each state estimation. The experimental results are comparable to the offline validation results in Section III. The orange lines in Fig. 21-Fig. 24 (b) depict the height of the armrest; they indicate that the armrest stops moving when an anomaly is detected while the user is rising or descending. The orange lines in Fig. 21-Fig. 24 (c) represent the velocities of the care robot, and the black lines indicate whether the wheel brakes are on or off. The results indicate that the wheels are braked when the state is estimated to be about to fall, abnormal rise, or abnormal descent. Only small time errors exist; overall, we confirm that the robot estimates the user state accurately and works properly. Fig. 25 and Fig. 26 present results from other trials of the experiments. The robot estimates that the user is sitting and leaning between descending and normal sitting, as shown in Fig. 25 (a). Near the boundary between descending and sitting, the user is actually in a posture similar to sitting and leaning, because they are leaning while descending, as mentioned above. The incorrect estimation lasts only 0.5 s (5 frames), and because the leaning estimation starts 2.0 s after sitting down (as the user is assumed not to stand up immediately after sitting down), the care robot works without any problems, as shown in Fig. 25 (b) and (c).
The robot estimates that the user is standing and leaning in the early stage of normal standing, as illustrated in Fig. 26 (a). The user's posture is similar to standing and leaning immediately after standing up because the user is leaning while rising. This incorrect estimation lasts only 0.1 s, whereas the robot commits to a state transition only if the same state continues for 0.3 s. Hence, the robot works normally, as shown in Fig. 26 (b) and (c).
The robot operates without any issues in all 40 experiments (10 in each condition). These results verify that the proposed method realizes a system that can estimate the user state and provide appropriate assistance. In addition, the proposed system can detect anomalies and stop the robot appropriately.

V. CONCLUSION
In this study, we proposed a support system for the elderly that can assist in standing, walking, and sitting as a sequence of actions based on user state estimation. By adopting CoG candidates, it was possible to estimate the user state using a small number of sensors. The novel CoG candidate calculation method can be adopted not only for sit-to-stand motions, but also for stand-to-sit and walking motions by acquiring leg information. In addition, it requires fewer sensors than conventionally required to determine an accurate CoG position. An anomaly detection method using the CoG candidates was also proposed. The effectiveness of the system was validated via experiments using a care robot that implemented the state estimation and corresponding support functions.
In future studies, tests should be conducted with elderly people. In this study, a learning model was created for each user; however, we believe that a model applicable to various people, without prior measurement, can be created by normalizing the model. Currently, the robot stops its actuators when the user is in an abnormal state; this behavior could be changed to return the user to the starting position. In that case, it would be uncomfortable for the user if the robot automatically performed the return action without any guidance. Therefore, we plan to implement a communication function to confirm whether the user is actually in an abnormal state, including a function that returns the user to the normal state after confirmation. Such interaction is also useful for sitting down, because it may be uncomfortable for the user if the robot starts assisting the sitting-down motion when the user is not trying to sit down. By adopting an unsupervised learning method for anomaly detection, unexpected anomalies could also be detected.