Configuration of the McKibben Muscles and Action Intention Detection for an Artificial Assistant Suit

The artificial assistant suit is a kind of assistive equipment, actuated by McKibben muscles, that is worn on the human upper body to support the actions of the upper limbs. It automatically detects the action intention of the user's upper limb and outputs a definite force complying with that intention. The configuration of the McKibben muscles is introduced in detail, the method for detecting the action intention from the surface electromyography signal is discussed in depth, and the related experiments and their results are given.


Introduction
Powered assistive equipment has long attracted attention in science fiction movies, and related research has been conducted for over ten years. Science fiction has focused on enhancing human physical abilities, especially for military use, while research has mainly addressed equipment for improving the performance of the human body or implementing rehabilitation. The word "exoskeleton" is used to describe assistive equipment that can be worn on the human body; the expression "wearable robot" has almost the same meaning. Whether it is called an exoskeleton system or a wearable robot, such a device must have the following characteristics, because it is worn on the human body to help the user act: (i) it can implement most of the actions of the human body; (ii) it can detect the action intention automatically and then output a suitable force to help the human body act, much like the power steering of a car; (iii) it should be comfortable to wear.

There are several examples of exoskeleton systems and wearable robots. Hardiman, one of the first full-body powered exoskeleton systems, was developed in the 1960s by the General Electric Company [1,2]. It could be worn on the human body to amplify the user's arm strength, helping the arm output far more force than usual, but it did not fit closely to the body and, at 680 kg, it was very heavy because it was powered hydraulically. Another famous example is the artificial muscle suit developed by Prof. Kobayashi at the Tokyo University of Science, Japan [3]. This wearable muscle suit is worn on the upper body and the two arms to assist arm motion, but it can implement only a limited set of actions. It is mainly used to help the user's arms lift a load up and down vertically, and it does not detect the action intention of the human arm.
Instead, a small switch is connected to the muscle suit's control system; the user holds and operates the switch to start or stop the suit's action. The suit weighs 9.2 kg, much lighter than Hardiman because it is actuated by McKibben muscles, but the user still feels somewhat weighed down because the structure is complicated and many metal components are used.
Based on the above analysis, our lab has been developing an artificial assistant suit actuated by McKibben muscles that is worn on the human upper body to help the two arms act. Not only can it implement different actions of the human arm, but it can also detect the action intention of the two arms automatically and output a definite force complying with that intention (mainly for the actions of the shoulders and the elbows). It is therefore a kind of action-assist equipment for the human arms. For healthy or elderly people, it can assist everyday actions; for patients, it can serve as rehabilitation equipment for training the arms; for disabled people, it can act as a power source to help them recover arm function. Given these functions and characteristics, the following problems must be addressed: 1) How many McKibben muscles are needed, and how should they be distributed over and connected to the artificial assistant suit? 2) How can the action intention of the human arms be detected automatically? The intention must be detected correctly and in real time so that the corresponding force can be output. 3) How should each McKibben muscle contract after the action intention is detected? The artificial assistant suit can output the required force, and the user feels comfortable, only when all the McKibben muscles contract in a coordinated way. This paper mainly aims to tackle the first two problems.

Configuration of the McKibben muscles
To help the human arms act correctly, the McKibben muscle actuators must be configured reasonably. A single McKibben muscle cannot produce all the actions [4], because the structure of the human arm is complicated and its actions are varied. Therefore, many McKibben muscles are required, and they must operate in coordination to achieve the required force. Based on the above analysis, we resolve the arm actions into the following basic ones: ① the shoulder rising up and down (to the front, to the back and to the side), shown in Fig.1(a) and Fig.1(b); ② the shoulder swinging around the vertical axis, shown in Fig.1(c); ③ the elbow bending and stretching, shown in Fig.1(d); ④ the forearm rotating around its anatomical axis. First, consider the anatomical structure of the human arm. More than 20 muscles take part in the actions of the shoulder joint and the elbow joint of the same arm [5], and all the actions of the shoulder and the elbow are realized by these muscles contracting in coordination [6]. To assist the actions of the human arm correctly, the artificial assistant suit must first know the action intention of the user wearing it; it then calculates the magnitude and direction of the required force and outputs it.
As for detecting the motion intention of the human, the traditional way is to analyse non-biological signals such as angle, acceleration, force, vision or hearing. None of these signals is directly related to the motion intention. The surface electromyography (SEMG) signal, however, is generated directly by the muscles [7,8] and can therefore express the motion intention more directly and more precisely [9], which makes it more suitable as the control signal of the artificial assistant suit. This paper discusses how to detect the motion intention of both the shoulder joint and the elbow joint by collecting the SEMG signals of the human arm.
We collected SEMG data for the given basic actions and then established the relationship between the SEMG signal and the motion intention using either a classifier or a BP neural network; in this research we selected the classifier.
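The paper does not specify the feature set or the type of classifier, so the following sketch is only one plausible illustration: it extracts a root-mean-square (RMS) amplitude feature per SEMG channel and classifies with a simple nearest-centroid rule. All function and class names here are our own assumptions, not from the original work.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one SEMG window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def extract_features(channels, start, width):
    """One RMS feature per channel over samples [start, start + width)."""
    return [rms(ch[start:start + width]) for ch in channels]

class NearestCentroidClassifier:
    """Assigns a feature vector to the motion state whose mean training
    feature vector is closest (squared Euclidean distance)."""

    def fit(self, features, labels):
        sums, counts = {}, {}
        for f, y in zip(features, labels):
            acc = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: [v / counts[y] for v in acc]
                          for y, acc in sums.items()}
        return self

    def predict(self, f):
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(f, self.centroids[y])))
```

In this scheme the training stage corresponds to calling `fit` on labelled windows, and the test stage to calling `predict` on new windows.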

Detection of the action intention
The motions of the human arm are very complicated.
Here, only the motions of the shoulder and the elbow are considered. The detection experiments cover only the four basic actions and their combinations, which correspond to Fig.1(a), Fig.1(b), Fig.1(c) and Fig.1(d). They are divided among three degrees of freedom: degree of freedom 1, the shoulder rising up and down; degree of freedom 2, the shoulder swinging around the vertical axis; and degree of freedom 3, the elbow bending and stretching.
Five motion states of the arm, with the subject standing on the ground, need to be defined before the experiment; they are shown in Fig.3. State 0: the arm is relaxed. State 1: the elbow is bent to almost its largest angle. State 2: the arm is lifted frontward to almost shoulder level with the elbow straight. State 3: the arm is lifted backward to the largest angle of the shoulder joint with the elbow straight. State 4: the arm is lifted sideward to almost shoulder level.
Three healthy male master's students took part in the experiments. An electroencephalograph, the EEG1100 made by Nihon Kohden Corporation, is used, with the sampling frequency set to 100 Hz, the sensitivity to 100 μV and the passband to 0.1~200 Hz. Several pairs of electrodes are used; the two electrodes of each pair are placed 3 cm apart and aligned with the muscle fibres, as shown in Fig.4. The reference voltage is the average of the voltages at the two earlobes. To obtain the motion intention of the human arm on every degree of freedom, the SEMG signals of the biceps brachii, triceps brachii, the anterior, middle and posterior parts of the deltoid, and the pectoralis major are recorded. The experiments are divided into two stages: the training stage and the test stage. During the training stage, SEMG data for the four basic actions are sampled to establish and train the classifier. In the test stage, the classifier is tested to verify whether it can predict the action intention correctly.
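To turn the continuous 100 Hz recordings into classifier inputs, the raw SEMG stream is typically segmented into short overlapping windows. The window and step lengths below (200 ms and 50 ms) are illustrative assumptions; the paper does not state the segmentation it used.

```python
def sliding_windows(signal, fs=100, win_ms=200, step_ms=50):
    """Yield (start_index, window) pairs over a single SEMG channel.

    With fs = 100 Hz (the sampling rate used in the experiments), a
    200 ms window holds 20 samples, and the 50 ms step produces a new
    intention estimate every 5 samples.
    """
    width = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    for start in range(0, len(signal) - width + 1, step):
        yield start, signal[start:start + width]
```

Overlapping windows trade a small amount of latency for a smoother stream of intention estimates, which matters for real-time force control.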
During the training stage, the experimenter's arm moves cyclically through the sequence of states defined above (see Fig.3 and Fig.5). In the test stage, the classifier is used to predict the action intention, and the motions are recorded by a video camera so that they can be compared with the result calculated by the classifier.

The data process
In the training stage, the action between state 0 and state 1 is the motion on degree of freedom 3. The actions between states 0 and 2, states 0 and 3, and states 0 and 4 are the motions on degree of freedom 1. The action between states 2 and 4 is the motion on degree of freedom 2. The task of the training stage is to train three classifiers on the SEMG data so that the motion intention on each of the three degrees of freedom can be judged. In the test stage, the experimenter performs complex actions: complex action 1 combines motions on all three degrees of freedom, and complex action 2 combines motions on degrees of freedom 1 and 3.
To detect the real-time motion intention of the human arm on the three degrees of freedom, the six pairs of SEMG signals are divided into three groups, shown in Table 1; each group corresponds to one degree of freedom and is used to judge the motion intention on it. Group 1, consisting of the SEMG signals of the anterior, middle and posterior parts of the deltoid, judges the motion intention on degree of freedom 1. Group 2, consisting of the SEMG signal of the pectoralis major, judges the motion intention on degree of freedom 2. Group 3, consisting of the SEMG signals of the biceps brachii and triceps brachii, judges the motion intention on degree of freedom 3. The data processing of the three groups is independent. Taking group 1 as an example: because gravity acts in the vertical direction, the experimenter's motion intention is "rising up" not only when the arm is actually rising but also when it is held in state 2, 3 or 4 against gravity.
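The channel-to-group routing of Table 1 can be sketched as a simple mapping from degree of freedom to muscle channels, so that each group of features can be fed to its own classifier independently. The channel names are our own illustrative identifiers.

```python
# Per-DOF channel grouping following Table 1
# (channel names are illustrative identifiers, not from the paper).
CHANNEL_GROUPS = {
    1: ("deltoid_anterior", "deltoid_middle", "deltoid_posterior"),
    2: ("pectoralis_major",),
    3: ("biceps_brachii", "triceps_brachii"),
}

def group_features(features_by_channel):
    """Split one feature value per channel into the three per-DOF
    groups, one group per independent classifier."""
    return {dof: [features_by_channel[name] for name in names]
            for dof, names in CHANNEL_GROUPS.items()}
```

Keeping the groups disjoint means each classifier sees only the muscles relevant to its degree of freedom, which keeps the three intention decisions independent, as in the paper.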
As shown in Fig.6, the three classifiers are designed and trained. As shown in Fig.7, the data from the test stage are used to check whether the classifiers work correctly; the output of the classifiers is compared with the videos taken by the video camera.
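The comparison against the video recording amounts to scoring how often the predicted intention matches a ground-truth label read off the synchronized video. The paper does not give a numerical metric; a per-window accuracy like the sketch below is one natural choice.

```python
def frame_accuracy(predicted, labelled):
    """Fraction of analysis windows whose predicted intention matches
    the ground-truth label derived from the synchronized video."""
    if len(predicted) != len(labelled):
        raise ValueError("prediction and label sequences must be aligned")
    hits = sum(p == t for p, t in zip(predicted, labelled))
    return hits / len(predicted)
```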

Experiment results
Fig.8 shows the data for complex action 1. The first six subfigures show the relevant SEMG signals, and the last three show the analysed motion intention, which is clearly consistent with the real motion intention. Fig.9 gives the data for complex action 2 and shows almost the same situation as Fig.8. The ninth subfigure in Fig.9, however, reveals a problem: the analysis does not recover the real action intention. The real action, complex action 2, includes the shoulder rising to the side and the elbow bending simultaneously, but in the earlier stage of the action the analysis concludes that the shoulder rises to the side, then the elbow bends, and then the elbow stretches backward. The detection result becomes stable in the later stage.

Conclusion
Based on the requirements of the artificial assistant suit, the configuration of the McKibben muscles is investigated and the detailed algorithm for detecting the action intention from SEMG signals is introduced. The experimental results show that intention detection is efficient and accurate for most of the actions; there is only one situation in which the correct intention information cannot be obtained. This problem must be resolved in future development. The research is not yet complete: the third problem, the coordinated contraction of the McKibben muscles, still has to be investigated and will be emphasized in future research.

Acknowledgements
This study is supported by the project from Shanghai Committee of Science and Technology, China (Pujiang Plan: 10PJ1409200).