Article

Stacked Sparse Autoencoders for EMG-Based Classification of Hand Motions: A Comparative Multi Day Analyses between Surface and Intramuscular EMG

by Muhammad Zia ur Rehman 1,*, Syed Omer Gilani 1, Asim Waris 1,2, Imran Khan Niazi 1,2,3, Gregory Slabaugh 4, Dario Farina 5 and Ernest Nlandu Kamavuako 6
1 Department of Robotics & Artificial Intelligence, School of Mechanical & Manufacturing Engineering, National University of Sciences & Technology (NUST), Islamabad 44000, Pakistan
2 Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, 9200 Aalborg, Denmark
3 Center for Chiropractic Research, New Zealand College of Chiropractic, Auckland 1060, New Zealand
4 Department of Computer Science, City University of London, London EC1V 0HB, UK
5 Department of Bioengineering, Imperial College London, London SW7 2AZ, UK
6 Centre for Robotics Research, Department of Informatics, King’s College London, London WC2G 4BG, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(7), 1126; https://doi.org/10.3390/app8071126
Submission received: 12 June 2018 / Revised: 28 June 2018 / Accepted: 9 July 2018 / Published: 11 July 2018
(This article belongs to the Special Issue Deep Learning and Big Data in Healthcare)

Abstract:
Advances in myoelectric interfaces have increased the use of wearable prosthetics, including robotic arms. Although promising results have been achieved with pattern recognition-based control schemes, control robustness requires improvement to increase user acceptance of prosthetic hands. The aim of this study was to quantify the performance of stacked sparse autoencoders (SSAE), an emerging deep learning technique, for improving myoelectric control and to compare multiday surface electromyography (sEMG) and intramuscular EMG (iEMG) recordings. Ten able-bodied and six amputee subjects with average ages of 24.5 and 34.5 years, respectively, were evaluated using offline classification error as the performance metric. Surface and intramuscular EMG were recorded concurrently while each subject performed 11 hand motions. The performance of SSAE was compared with that of the linear discriminant analysis (LDA) classifier. Within-day analysis showed that SSAE (1.38 ± 1.38%) outperformed LDA (8.09 ± 4.53%) using both the sEMG and iEMG data from both able-bodied and amputee subjects (p < 0.001). In the between-day analysis, SSAE also outperformed LDA (7.19 ± 9.55% vs. 22.25 ± 11.09%) using both sEMG and iEMG data from both able-bodied and amputee subjects. No significant difference in performance was observed between iEMG and sEMG with SSAE in the within-day and pairs-of-days (eight-fold validation) analyses, whereas sEMG outperformed iEMG (p < 0.001) in the between-day analysis with both the two-fold and seven-fold validation schemes. These results imply that SSAE can significantly improve the performance of pattern recognition-based myoelectric control schemes and can extract deep information hidden in the EMG data.

1. Introduction

Advances in myoelectric interfaces have the potential to revolutionise the use of wearable prosthetic devices as artificial substitutes for missing limbs. Active hand and arm prostheses are usually controlled by electromyography (EMG) signals. EMG signals can be recorded from remnant muscles using either invasive or non-invasive electrodes. In most commercially available upper limb prostheses, EMG is used for on-off control, which allows the prosthesis to move bi-directionally with constant velocity [1]. With the addition of proportional control, the velocity of a prosthetic function is proportional to the intensity of the EMG signal [2]. These clinical control schemes are based on the direct association (direct control) of EMG signals to the actuation of degrees of freedom (DoF) [1] and therefore require at least two independent EMG channels to drive each DoF. To overcome this limitation, control schemes based on pattern recognition (PR) [3,4,5,6] have been used to decode several prosthesis functions using supervised learning.
A number of algorithms have been tested for EMG classification, including artificial neural networks (ANN) [7,8,9,10,11], log linearized Gaussian mixture networks (LLGMN) [12,13,14,15], Fuzzy mean max NN [16], radial basis function [17], hidden Markov model [18], Bayesian network [16,19], random forest [20], k-nearest neighbors (KNN) [21,22], support vector machine (SVM) [23], and linear discriminant analysis (LDA) [24,25]. Some of these algorithms achieved classification accuracies above 95% for up to 10 classes when applied to temporal and frequency EMG features [26]. PR-based approaches are promising compared to conventional methods. However, their clinical usability is still in its infancy and thus the natural control of prostheses is still limited to a few basic movements [27,28]. This calls for better machine learning methods to improve usability of PR schemes.
Sophisticated deep learning algorithms have had a strong impact on several applied fields, such as computer vision [29] and speech recognition [30]. In addition to classical machine learning methods, deep learning algorithms, such as autoencoders (AE) and convolutional neural networks (CNN), have been used in biomedical signal applications [31], including electroencephalography (EEG) [32,33,34] and electrocardiography (ECG) [35,36,37,38]. Despite some applications in EMG processing [39,40,41,42], AE-based methods have not been extensively applied in myoelectric control. Several studies evaluated CNNs in myoelectric control; Park and Lee [43] decoded movement intention from EMG using a CNN that outperformed an SVM classifier. Atzori et al. [28] proposed a deep network-based algorithm for classification of surface EMG (sEMG) associated with hand movements in the publicly available Ninapro database of intact-limb and amputee subjects [44]. This technique provided comparable performance with respect to KNN, SVM, random forest, and LDA. Other studies [27,45,46,47,48,49] evaluated deep learning methods and concluded that these methods performed better than or comparably to classical machine learning algorithms. Most of these studies were performed with datasets recorded in a single session, which limits what can be concluded about the usefulness of deep networks. Hence, the performance of deep networks over multiple sessions needs to be assessed with both sEMG and intramuscular EMG (iEMG).
Whereas myoelectric control is most commonly applied with sEMG, iEMG has also been proposed as an approach to overcome some of the limitations of non-invasive systems [50]. For example, Kamavuako et al. [3] showed that the classification accuracy of a myoelectric control system with combined surface and intramuscular EMG was better than sEMG alone. Other studies [51,52,53,54,55] compared the individual performance of sEMG and iEMG for classification of different hand and wrist movements and generally found similar performance (no significant difference). However, all these previous studies reported results on able-bodied individuals in a single recording session. Hence nothing can be said about the performance with amputee subjects or over multiple days.
In this study, we applied stacked sparse autoencoders (SSAE) to a myoelectric control application and compared their performance with that of the benchmark LDA classifier, which is widely used in myoelectric control research [24]. Moreover, we tested able-bodied as well as amputee subjects over multiple sessions on different days and compared sEMG and iEMG classification.

2. Materials and Methods

2.1. Subjects

Ten able-bodied subjects (male, mean age ± SD = 24.5 ± 22.02 years) and six transradial amputee subjects (male, three left and three right transradial amputations, mean age ± SD = 34.8 ± 32.7 years) participated in the experiments. One amputee regularly used a body-powered prosthesis, whereas the others did not use any prostheses. The procedures were performed in accordance with the Declaration of Helsinki and approved by the local ethical committee of Riphah International University (approval no.: ref# Riphah/RCRS/REC/000121/20012016). Subjects provided written informed consent prior to the experimental procedures.

2.2. Experimental Procedures

Surface and intramuscular EMG signals were collected concurrently. Six sEMG and six iEMG electrodes were used for the able-bodied subjects: three electrodes were placed on the flexor muscles and three on the extensor muscles. The same number of electrodes was used for three of the amputees; for the other three amputees, it was only possible to use five surface electrodes and three to six intramuscular electrodes. Electrode placement is shown in Figure 1. Intramuscular EMG signals were filtered with an analog bandpass filter of 100–900 Hz and sampled at 8 kHz. Surface EMG signals were filtered at 10–500 Hz and sampled at 8 kHz [25].
Each subject performed 11 hand motions in each experimental session: hand open, hand close, flex hand, extend hand, pronation, supination, side grip, fine grip, agree, pointer, and rest. Seven experimental sessions separated by 24 h were completed by each subject. In each session, each hand movement was repeated four times with a contraction and relaxation time of 5 s, as shown in Figure 2; hence, a single session was 400 s long. The sequence of movements was randomized for each session.
After data collection, during offline analysis, drift was found in the onset and offset of the individual movement periods. Therefore, the individual periods were labeled with a semi-automatic technique in MATLAB 2016a: the onset time was selected manually with a cursor on the data, and the corresponding time periods were stored automatically. Each period was then reduced to three seconds by removing the first and last second of data to discard the transient parts.
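As an illustration of this trimming step, a minimal MATLAB sketch is given below; the synthetic signal and the manually selected onset index are placeholders standing in for the cursor-based labeling described above.

```matlab
% Minimal sketch of the period-trimming step described above.
% Synthetic data stand in for a recorded channel set; 'onsetIdx' would be
% chosen manually with a cursor in practice (hypothetical variable names).
fs       = 8000;                                    % sampling rate (Hz)
emg      = randn(60*fs, 6);                         % placeholder: 60 s, 6 channels
onsetIdx = 12*fs + 1;                               % example manually picked onset
period   = emg(onsetIdx : onsetIdx + 5*fs - 1, :);  % full 5-s contraction
steady   = period(fs+1 : 4*fs, :);                  % keep middle 3 s (drop transients)
```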

2.3. Signal Processing

Surface and intramuscular EMG signals were digitally filtered with a 3rd-order Butterworth band-pass filter with bandwidths of 20–500 Hz and 100–900 Hz, respectively, and with a 3rd-order Butterworth band-stop filter to suppress 50 Hz powerline noise [41].
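A minimal sketch of this pre-filtering step using butter and filtfilt from the MATLAB Signal Processing Toolbox is shown below. The use of zero-phase (filtfilt) filtering and the 49–51 Hz band-stop edges are assumptions; the paper specifies only the filter order and the pass/stop bands.

```matlab
% Sketch of the digital pre-filtering step (Signal Processing Toolbox),
% assuming 8 kHz data in [samples x channels] matrices; placeholder signals.
fs   = 8000;
semg = randn(10*fs, 6);  iemg = randn(10*fs, 6);      % placeholder recordings

[bS, aS] = butter(3, [20 500]/(fs/2), 'bandpass');    % sEMG band-pass, 3rd order
[bI, aI] = butter(3, [100 900]/(fs/2), 'bandpass');   % iEMG band-pass, 3rd order
[bN, aN] = butter(3, [49 51]/(fs/2), 'stop');         % 50 Hz power-line band-stop

semgF = filtfilt(bN, aN, filtfilt(bS, aS, semg));     % zero-phase filtering (assumed)
iemgF = filtfilt(bN, aN, filtfilt(bI, aI, iemg));
```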
Four time-domain features, mean absolute value (MAV), waveform length (WL), zero crossing (ZC), and slope-sign change (SSC) [4], were computed from the sEMG and iEMG signals in windows of 200 ms with increments of 28.5 ms.
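A minimal sketch of this feature extraction for a single channel is given below. The window and increment lengths follow the values above; the dead-zone threshold used for the ZC and SSC counts is an assumption, since the paper does not report it.

```matlab
% Sketch of the four Hudgins time-domain features over sliding windows
% (200 ms windows, 28.5 ms increments). The ZC/SSC threshold is assumed.
fs     = 8000;
x      = randn(3*fs, 1);                 % one 3-s channel (placeholder)
winLen = round(0.200 * fs);              % 200 ms window
step   = round(0.0285 * fs);             % 28.5 ms increment
thr    = 0.01;                           % assumed dead-zone threshold

starts = 1 : step : numel(x) - winLen + 1;
feat   = zeros(numel(starts), 4);        % columns: [MAV WL ZC SSC]
for k = 1:numel(starts)
    w  = x(starts(k) : starts(k) + winLen - 1);
    dw = diff(w);
    feat(k,1) = mean(abs(w));                                          % MAV
    feat(k,2) = sum(abs(dw));                                          % WL
    feat(k,3) = sum(w(1:end-1).*w(2:end) < 0 & abs(dw) >= thr);        % ZC
    feat(k,4) = sum(dw(1:end-1).*dw(2:end) < 0 & ...
                    (abs(dw(1:end-1)) >= thr | abs(dw(2:end)) >= thr)); % SSC
end
```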
The signals were then classified with SSAE (as detailed below) [56,57] and a state-of-the-art LDA classifier from a publicly available myoelectric control library (MECLAB) [24]. LDA was chosen for comparison because it is commonly used in the myoelectric control literature, including online studies [58,59] and the commercially available prosthetic hand COAPT [60]. For the within-day analysis, a five-fold cross-validation scheme was used for testing. For the between-days analysis, all pairs of days were compared with an eight-fold validation in which the data were randomly divided into eight equal folds. Moreover, the classifiers were trained and tested on separate days with a two-fold cross-validation, in which each day was used once for training and once for testing. Finally, the data from the seven days were tested with a seven-fold cross-validation, in which six days were used for training and one day for testing, repeated seven times.
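As an illustration of the seven-fold (leave-one-day-out) scheme described above, the sketch below arranges per-day feature matrices into training and test sets. The variable names (featDay, labelDay) and the synthetic data are hypothetical, and fitcdiscr from the Statistics and Machine Learning Toolbox stands in for the LDA implementation; the study itself used MECLAB [24].

```matlab
% Minimal sketch of the seven-fold (leave-one-day-out) validation with
% synthetic placeholder data. featDay{d} holds the feature windows of day d
% and labelDay{d} the corresponding movement labels (hypothetical names).
nDays = 7;  nWin = 300;  nFeat = 24;  nCls = 11;
featDay  = arrayfun(@(d) randn(nWin, nFeat), 1:nDays, 'UniformOutput', false);
labelDay = arrayfun(@(d) randi(nCls, nWin, 1), 1:nDays, 'UniformOutput', false);

err = zeros(nDays, 1);
for testDay = 1:nDays
    trainDays = setdiff(1:nDays, testDay);             % six days for training
    Xtr = vertcat(featDay{trainDays});  Ytr = vertcat(labelDay{trainDays});
    Xte = featDay{testDay};             Yte = labelDay{testDay};
    mdl = fitcdiscr(Xtr, Ytr);                          % LDA stand-in classifier
    err(testDay) = mean(predict(mdl, Xte) ~= Yte);      % per-day test error
end
meanBetweenDayError = mean(err);
```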

2.4. Stacked Sparse Autoencoders

AEs are deep networks trained in an unsupervised fashion to replicate the input at the output [61]. AEs consist of an encoder and a decoder. An encoder maps an input x to a new representation z, which is decoded back at the output to reconstruct the input x′:
$z = h(Wx + b)$  (1)
$x' = g(W'z + b')$  (2)
where h and g are activation functions, W and W′ are weight matrices, and b and b′ are bias vectors for the encoder and decoder, respectively [62]. The error between the input x and the reconstructed input x′ is minimized as follows:
$\min_{W,b,W',b'} \sum_{i=1}^{n} \lVert x_i - x'_i \rVert^2$  (3)
In this work, stacked sparse autoencoders (SSAE) [57] were used with two hidden layers, consisting of 24 and 12 hidden units (Figure 3). For both layers, the logistic sigmoid and linear functions were used for the encoders and decoders, respectively. In an SSAE, the output of one AE is fed to the input of the next AE [39], and sparsity is encouraged by adding a regularization term to the cost function [63] that constrains the average output activation of the neurons. The average output activation of neuron i can be formulated as:
$\hat{p}_i = \frac{1}{n} \sum_{j=1}^{n} z_i(x_j)$  (4)
where i denotes the ith neuron, n is the total number of training examples, and j is the jth training example. This regularizer is introduced into the cost function using the Kullback–Leibler divergence [64]:
$\Omega_{\mathrm{sparsity}} = \sum_{i=1}^{d} \left[ p \log\!\left(\frac{p}{\hat{p}_i}\right) + (1-p) \log\!\left(\frac{1-p}{1-\hat{p}_i}\right) \right]$  (5)
where d is the total number of neurons in a layer [65] and p is the desired activation value, called sparsity proportion (SP). An L2 regularization term (L2R) is further added to the cost function to control the weights:
$\Omega_{\mathrm{weights}} = \frac{1}{2} \sum_{l}^{L} \sum_{j}^{N} \sum_{i}^{K} \left( w_{ji}^{(l)} \right)^2$  (6)
where L represents the number of hidden layers, N is the total number of observations, and K is the number of features within an observation.
Therefore, by inserting the regularization terms from Equations (5) and (6) into the reconstruction error in Equation (3), the cost function can be formulated as follows:
$E = \underbrace{\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \left( x_{kn} - \hat{x}_{kn} \right)^2}_{\text{mean squared error}} + \underbrace{\lambda\, \Omega_{\mathrm{weights}}}_{\text{L2 regularization (L2R)}} + \underbrace{\beta\, \Omega_{\mathrm{sparsity}}}_{\text{sparsity regularization (SR)}}$  (7)
The three optimization parameters are λ (the coefficient for L2R), which prevents overfitting; β (the coefficient for sparsity regularization, SR), which controls the sparsity penalty term; and p (SP), which sets the desired level of sparsity [33,66]. Parameter optimization for both layers is shown in Figure 4.
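For illustration, the snippet below evaluates the Kullback–Leibler sparsity penalty of Equation (5) for the chosen sparsity proportion p = 0.5 and a few hypothetical average activations; the penalty vanishes only when every average activation equals p.

```matlab
% Worked check of the sparsity penalty in Equation (5) for p = 0.5 and
% illustrative average activations (hypothetical values).
p    = 0.5;                              % sparsity proportion (SP)
pHat = [0.45 0.62 0.30 0.55];            % example average activations of 4 neurons
kl   = p*log(p./pHat) + (1-p)*log((1-p)./(1-pHat));
Omega_sparsity = sum(kl);                % ~0.13 here; zero only when pHat == p
```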
Both AEs were trained with the scaled conjugate gradient algorithm [67] using greedy layer-wise training [68]. Finally, the softmax layer was trained in a supervised fashion, stacked with the sparse AEs as shown in Figure 3, and the whole network was fine-tuned before final classification.
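A sketch of this training pipeline using the MATLAB Neural Network Toolbox functions trainAutoencoder, trainSoftmaxLayer, stack, and train is given below, with placeholder data. The architecture and regularization values follow those reported in this section; whether the authors used these exact toolbox calls is an assumption, although the toolbox is cited [56]. By default, trainAutoencoder uses scaled conjugate gradient training, consistent with the description above.

```matlab
% Sketch of the two-layer SSAE with softmax classification (placeholder data).
X = rand(24, 5000);                          % features x observations (24 = 4 features x 6 channels)
T = full(ind2vec(randi(11, 1, 5000)));       % one-hot labels, 11 movement classes

ae1 = trainAutoencoder(X, 24, ...            % layer 1: 24 hidden units
    'EncoderTransferFunction', 'logsig', 'DecoderTransferFunction', 'purelin', ...
    'L2WeightRegularization', 1e-4, 'SparsityRegularization', 0.01, ...
    'SparsityProportion', 0.5, 'MaxEpochs', 500);
Z1  = encode(ae1, X);                        % layer-1 features

ae2 = trainAutoencoder(Z1, 12, ...           % layer 2: 12 hidden units
    'EncoderTransferFunction', 'logsig', 'DecoderTransferFunction', 'purelin', ...
    'L2WeightRegularization', 1e-4, 'SparsityRegularization', 0.01, ...
    'SparsityProportion', 0.5, 'MaxEpochs', 500);
Z2  = encode(ae2, Z1);                       % layer-2 features

soft    = trainSoftmaxLayer(Z2, T, 'MaxEpochs', 500);   % supervised softmax layer
deepnet = stack(ae1, ae2, soft);             % stack the trained layers
deepnet = train(deepnet, X, T);              % supervised fine-tuning of the whole network
Yhat    = deepnet(X);                        % class scores for (new) data
```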

2.5. Statistical Tests

For the performance comparisons between the SSAE and LDA classifiers and between surface and intramuscular EMG-based control schemes, Friedman’s tests with a two-way layout were applied. Classification results are presented as mean error with standard deviation. p-values less than 0.05 were considered significant.
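A minimal sketch of this comparison with the friedman function from the Statistics and Machine Learning Toolbox is shown below; the per-subject error vectors are placeholders.

```matlab
% Sketch of the Friedman test for one classifier comparison (placeholder data):
% rows are subjects (blocks), columns are the two classifiers (treatments).
errSSAE = rand(16,1)*5;   errLDA = rand(16,1)*20 + 5;   % placeholder mean errors (%)
p = friedman([errSSAE errLDA], 1, 'off');               % two-way layout, no plot
significant = p < 0.05;                                 % significance criterion
```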

3. Results

3.1. Parameter Optimization

Different combinations of the optimization parameters (L2R, SR, and SP) were explored for both layers, and the parameter values that minimized the mean squared error (MSE) were chosen for each layer. Figure 4 shows the MSE for both layers (maximum of 500 epochs) for different combinations of the three parameters. The chosen parameter values were the same for the two layers (L2R = 0.0001, SR = 0.01, and SP = 0.5).
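The layer-one parameter sweep can be sketched as a simple grid search over L2R, SR, and SP that keeps the combination with the lowest reconstruction MSE. The candidate grids below are illustrative assumptions; the paper reports only the selected values.

```matlab
% Sketch of the parameter sweep for one autoencoder layer (placeholder data).
X = rand(24, 2000);                                   % placeholder feature matrix
L2Rs = [1e-5 1e-4 1e-3];  SRs = [0.01 0.1 1];  SPs = [0.05 0.1 0.5];  % assumed grids
best = struct('mse', inf);
for a = L2Rs
  for b = SRs
    for c = SPs
      ae  = trainAutoencoder(X, 24, 'MaxEpochs', 500, ...
              'L2WeightRegularization', a, 'SparsityRegularization', b, ...
              'SparsityProportion', c, 'ShowProgressWindow', false);
      Xr  = predict(ae, X);                           % reconstruction of the input
      err = mean((X(:) - Xr(:)).^2);                  % reconstruction MSE
      if err < best.mse
          best = struct('mse', err, 'L2R', a, 'SR', b, 'SP', c);  % keep the winner
      end
    end
  end
end
```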

3.2. SSAE vs. LDA

Classification errors were computed for each day using five-fold cross-validation, in which the data of an individual day were divided randomly into five equal folds, ensuring that each fold contained an equal number of all movements. Results were then averaged over the seven days for each subject.
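A sketch of this stratified five-fold split for one day of data is shown below; Xday, Yday, and the placeholder data are hypothetical, and fitcdiscr again stands in for the classifier.

```matlab
% Sketch of the stratified five-fold within-day validation (placeholder data).
Xday = randn(1100, 24);  Yday = repelem((1:11)', 100);  % 11 classes, 100 windows each
cv   = cvpartition(Yday, 'KFold', 5);       % stratified: equal class share per fold
err  = zeros(cv.NumTestSets, 1);
for k = 1:cv.NumTestSets
    mdl    = fitcdiscr(Xday(training(cv,k),:), Yday(training(cv,k)));
    err(k) = mean(predict(mdl, Xday(test(cv,k),:)) ~= Yday(test(cv,k)));
end
dayError = mean(err);                       % averaged over folds, then over days
```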
For this analysis, EMG data were arranged in four sets including sEMG and iEMG data of both healthy and amputee subjects. All four sets of data were classified with SSAE and LDA separately and the results are shown in Figure 5 as the average of 10 healthy and six amputee subjects.
SSAE achieved significantly lower error rates than LDA in all cases (p < 0.001) (Figure 5).

3.3. sEMG vs. iEMG

The performance resulting from the use of sEMG and iEMG data was compared using four combinations of datasets, including healthy and amputee data classified with SSAE and LDA. The same cross-validation scheme was used as in Section 3.2. Results for this analysis are shown in Figure 6 as the average of the 10 healthy and six amputee subjects.
No significant difference was observed between iEMG and sEMG when using SSAE (p > 0.05), whereas sEMG outperformed iEMG with LDA (p < 0.05).

3.4. Analysis between Pairs of Days

Data from the seven days were arranged into 21 unique pairs. Classification errors were calculated for each pair using eight-fold cross-validation (eight repetitions of each movement per pair of days, so each repetition constitutes a separate fold) and were averaged over all pairs for each subject. Results for this analysis are shown in Figure 7 as the average of the 10 healthy and six amputee subjects.
SSAE outperformed LDA by 11.93 and 21.59 percentage points for healthy and transradial amputee subjects, respectively (Figure 7). Furthermore, SSAE achieved error rates similar to those of the corresponding data in the within-day analysis, with percentage point differences of 3.55 and 11.26 for healthy and amputee subjects, respectively. Conversely, between-days LDA performance worsened significantly relative to the within-day analysis, by 11.28 and 18.95 percentage points for healthy and amputee subjects, respectively. Mean classification errors for the 10 healthy and six amputee subjects for each pair of days are shown in Table 1 and Table 2, respectively.
From Table 1, sEMG and iEMG classification performance was similar when using SSAE (percentage point difference of 1.76), whereas it differed significantly when using LDA, with sEMG superior to iEMG by a percentage point difference of 8.37. For transradial amputees, however, sEMG and iEMG were classified with similar accuracy by both SSAE and LDA (Table 2).

3.5. Between-Days Analysis

This analysis was performed with two-fold and seven-fold validation schemes. For the two-fold validation, each pair of days was tested so that one day was used for training and the other for testing, and the result was calculated as the average of all pairs. For the seven-fold validation, six days were used for training and one day for testing, repeated seven times. Results for both schemes are shown in Figure 8 as the average of the 10 healthy and six amputee subjects.
SSAE achieved lower error rates than LDA for both the two-fold and seven-fold validation. As the training data increased from one to six days, the error rate of SSAE decreased by 9.50 and 13.03 percentage points for sEMG and iEMG, respectively. The decrease in error rate with increasing training data was smaller for LDA (6.39 and 5.16). Moreover, in both the two-fold and seven-fold analyses, sEMG patterns proved to be more robust and outperformed the iEMG (p < 0.01).

4. Discussion

SSAE performed significantly better than LDA using both sEMG and iEMG data from able-bodied and amputee subjects (Figure 5 and Figure 7, respectively). Moreover, SSAE was more robust to between-day variability in signal features.
The number of hidden units in both layers was optimized, and further increments did not significantly reduce the error. Performance at both layers was better when using non-linear and linear activation functions for the encoders and decoders, respectively. Besides the architecture, during the training phase we noted that some parameters of the individual layers played an important role in the error reduction for that layer. Layers were trained in a greedy layer-wise fashion, in which each layer was trained independently of the others. At layer one, the MSE depended on SR but was almost independent of L2R. Conversely, at layer two, the MSE decreased with lower values of L2R.
The within-day analysis revealed that SSAE performed similarly when applied to iEMG and sEMG. Although both iEMG and sEMG achieved classification errors below 1% for able-bodied subjects, iEMG (σ² = 0.05) had less between-subject variance than sEMG (σ² = 0.26). For LDA, the classification error was greater than with SSAE and differed between sEMG and iEMG, with lower error rates for sEMG. These results are consistent with previous studies that showed similar performance for classifying iEMG and sEMG, or slightly poorer performance for iEMG [52,53,54,55].
When analysing between pairs of days, the performance of SSAE was similar to that achieved for the within-day analysis. Conversely, LDA performance substantially worsened when analysing different days. Hence, unlike LDA, SSAEs generalized well when adding data from other days.
For the between-days analysis with two-fold and seven-fold validation, SSAE outperformed LDA (p < 0.001). In this case, sEMG led to better performance than iEMG using both SSAE and LDA, revealing that sEMG patterns are comparatively more repeatable over days than iEMG patterns. This finding could be due to the fact that sEMG (a non-invasive recording) represents global information from the surrounding muscles, which increases its usability and robustness, unlike iEMG, which captures local information from a specific muscle.
Moreover, when the training set was increased, the performance of SSAE improved relatively more than that of LDA (Figure 8).
Previous studies on myocontrol that used deep networks focused only on sEMG [28,43,49]. For sEMG, the results of this study are comparable with those of previous studies, in which deep networks performed similarly to or better than classical machine learning algorithms. The present study also included iEMG data and showed that deep networks outperform LDA classification when using both sEMG and iEMG, with increased robustness across days.

5. Conclusions

For both the within-day and between-pairs-of-days analyses, SSAE significantly outperformed the state-of-the-art LDA classifier with both the sEMG and iEMG data. No difference was found between the sEMG and iEMG data of individual sessions. However, sEMG outperformed iEMG in the between-days analysis. The findings imply that deep networks are more robust across days and that performance improves when more sessions are added to the training data. Furthermore, sEMG patterns proved to be more consistent for long-term assessment than iEMG for both healthy and amputee subjects.

Author Contributions

Conceptualization, M.Z.u.R. and E.N.K.; Data curation, M.Z.u.R.; Methodology, A.W. and I.K.N.; Supervision, S.O.G. and E.N.K.; Writing—original draft, M.Z.u.R.; Writing—review & editing, S.O.G., I.K.N., G.S., D.F. and E.N.K.

Funding

This work was supported by the National University of Sciences and Technology, Pakistan.

Acknowledgments

Muhammad Zia ur Rehman would like to acknowledge the Higher Education Commission of Pakistan for supporting the student research visit to Aalborg University, Denmark, where this work was done.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geethanjali, P. Myoelectric control of prosthetic hands: State-of-the-art review. Med. Devices 2016, 9, 247. [Google Scholar] [CrossRef] [PubMed]
  2. Ison, M.; Artemiadis, P. Proportional myoelectric control of robots: Muscle synergy development drives performance enhancement, retainment, and generalization. IEEE Trans. Robot. 2015, 31, 259–268. [Google Scholar] [CrossRef]
  3. Kamavuako, E.N.; Scheme, E.J.; Englehart, K.B. Combined surface and intramuscular emg for improved real-time myoelectric control performance. Biomed. Signal Process. Control 2014, 10, 102–107. [Google Scholar] [CrossRef]
  4. Hudgins, B.; Parker, P.; Scott, R.N. A new strategy for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 1993, 40, 82–94. [Google Scholar] [CrossRef] [PubMed]
  5. Herberts, P.; Almström, C.; Kadefors, R.; Lawrence, P.D. Hand prosthesis control via myoelectric patterns. Acta Orthop. Scand. 1973, 44, 389–409. [Google Scholar] [CrossRef] [PubMed]
  6. Graupe, D.; Cline, W.K. Functional separation of emg signals via arma identification methods for prosthesis control purposes. IEEE Trans. Syst. Man Cybern. 1975, SMC-5, 252–259. [Google Scholar] [CrossRef]
  7. Putnam, W.; Knapp, R.B. Real-time computer control using pattern recognition of the electromyogram. In Proceedings of the 15th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 31 October 1993; pp. 1236–1237. [Google Scholar]
  8. Tsenov, G.; Zeghbib, A.; Palis, F.; Shoylev, N.; Mladenov, V. Neural networks for online classification of hand and finger movements using surface emg signals. In Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, Belgrade, Serbia and Montenegro, 25–27 September 2006; pp. 167–171. [Google Scholar]
  9. Rosenberg, R. The biofeedback pointer: Emg control of a two dimensional pointer. In Proceedings of the Second International Symposium on Wearable Computers, Digest of Papers, Pittsburgh, PA, USA, 19–20 October 1998; pp. 162–163. [Google Scholar]
  10. Jung, K.K.; Kim, J.W.; Lee, H.K.; Chung, S.B.; Eom, K.H. Emg pattern classification using spectral estimation and neural network. In Proceedings of the SICE Annual Conference, Takamatsu, Japan, 17–20 September 2007; pp. 1108–1111. [Google Scholar]
  11. El-Daydamony, E.M.; El-Gayar, M.; Abou-Chadi, F. A computerized system for semg signals analysis and classification. In Proceedings of the Radio Science Conference, Tanta, Egypt, 18–20 March 2008; pp. 1–7. [Google Scholar]
  12. Fukuda, O.; Tsuji, T.; Kaneko, M. An emg controlled pointing device using a neural network. In Proceedings of the IEEE SMC’99 Conference on Systems, Man, and Cybernetics, Tokyo, Japan, 12–15 October 1999; pp. 63–68. [Google Scholar]
  13. Tsuji, T.; Fukuda, O.; Murakami, M.; Kaneko, M. An emg controlled pointing device using a neural network. Trans. Soc. Instrum. Control Eng. 2001, 37, 425–431. [Google Scholar] [CrossRef]
  14. Fukuda, O.; Arita, J.; Tsuji, T. An emg-controlled omnidirectional pointing device using a hmm-based neural network. In Proceedings of the International Joint Conference on Neural Networks, Portland, OR, USA, 20–24 July 2003; pp. 3195–3200. [Google Scholar]
  15. Bu, N.; Hamamoto, T.; Tsuji, T.; Fukuda, O. Fpga implementation of a probabilistic neural network for a bioelectric human interface. In Proceedings of the 47th Midwest Symposium on Circuits and Systems (MWSCAS’04), Hiroshima, Japan, 25–28 July 2004. [Google Scholar]
  16. Kim, J.; Mastnik, S.; André, E. Emg-based hand gesture recognition for realtime biosignal interfacing. In Proceedings of the 13th International Conference on Intelligent User Interfaces, Gran Canaria, Spain, 13–16 January 2008; pp. 30–39. [Google Scholar]
  17. Mobasser, F.; Hashtrudi-Zaad, K. A method for online estimation of human arm dynamics. In Proceedings of the 28th Annual International Conference on Engineering in Medicine and Biology Society, New York City, NY, USA, 31 August–3 September 2006; pp. 2412–2416. [Google Scholar]
  18. Chen, X.; Zhang, X.; Zhao, Z.-Y.; Yang, J.-H.; Lantz, V.; Wang, K.-Q. Hand gesture recognition research based on surface emg sensors and 2d-accelerometers. In Proceedings of the 11th IEEE International Symposium on Wearable Computers, Boston, MA, USA, 11–13 October 2007; pp. 11–14. [Google Scholar]
  19. Jochumsen, M.; Waris, A.; Kamavuako, E.N. The effect of arm position on classification of hand gestures with intramuscular emg. Biomed. Signal Process. Control 2018, 43, 1–8. [Google Scholar] [CrossRef]
  20. Selami, K. Classification of emg signals using decision tree methods. In Proceedings of the 3rd International Symposium on Sustainable Development, Sarajevo, Bosnia and Herzegovina, 31 May–1 June 2012. [Google Scholar]
  21. Geethanjali, P.; Ray, K. Identification of motion from multi-channel emg signals for control of prosthetic hand. Australas. Phys. Eng. Sci. Med. 2011, 34, 419–427. [Google Scholar] [CrossRef] [PubMed]
  22. Waris, A.; Kamavuako, E.N. Effect of threshold values on the combination of emg time domain features: Surface versus intramuscular emg. Biomed. Signal Process. Control 2018, 45, 267–273. [Google Scholar] [CrossRef]
  23. Alkan, A.; Günay, M. Identification of emg signals using discriminant analysis and svm classifier. Expert Syst. Appl. 2012, 39, 44–47. [Google Scholar] [CrossRef]
  24. Chan, A.D.; Green, G.C. Myoelectric control development toolbox. In Proceedings of the 30th Conference of the Canadian Medical & Biological Engineering Society, Lyon, France, 23–26 August 2007; pp. M0100–M0101. [Google Scholar]
  25. Waris, A.; Niazi, I.K.; Jamil, M.; Gilani, O.; Englehart, K.; Jensen, W.; Shafique, M.; Kamavuako, E.N. The effect of time on emg classification of hand motions in able-bodied and transradial amputees. J. Electromyogr. Kinesiol. 2018, 40, 72–80. [Google Scholar] [CrossRef] [PubMed]
  26. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Feature reduction and selection for emg signal classification. Expert Syst. Appl. 2012, 39, 7420–7431. [Google Scholar] [CrossRef]
  27. Koch, P.; Phan, H.; Maass, M.; Katzberg, F.; Mertins, A. Early prediction of future hand movements using semg data. In Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 11–15 July 2017; pp. 54–57. [Google Scholar]
  28. Atzori, M.; Cognolato, M.; Müller, H. Deep learning with convolutional neural networks: A resource for the control of robotic prosthetic hands via electromyography. Front. Neurorobot. 2016, 10, 9. [Google Scholar] [CrossRef] [PubMed]
  29. Liu, N.; Han, J.; Zhang, D.; Wen, S.; Liu, T. Predicting eye fixations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 362–370. [Google Scholar]
  30. Chorowski, J.K.; Bahdanau, D.; Serdyuk, D.; Cho, K.; Bengio, Y. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: San Diego, CA, USA, 2015; pp. 577–585. [Google Scholar]
  31. Min, S.; Lee, B.; Yoon, S. Deep learning in bioinformatics. Brief. Bioinform. 2017, 18, 851–869. [Google Scholar] [CrossRef] [PubMed]
  32. Lin, Q.; Ye, S.-Q.; Huang, X.-M.; Li, S.-Y.; Zhang, M.-Z.; Xue, Y.; Chen, W.-S. Classification of epileptic eeg signals with stacked sparse autoencoder based on deep learning. In Proceedings of the International Conference on Intelligent Computing, Lanzhou, China, 2–5 August 2016; Springer: Berlin, Germany, 2016; pp. 802–810. [Google Scholar]
  33. Tsinalis, O.; Matthews, P.M.; Guo, Y. Automatic sleep stage scoring using time-frequency analysis and stacked sparse autoencoders. Ann. Biomed. Eng. 2016, 44, 1587–1597. [Google Scholar] [CrossRef] [PubMed]
  34. Najdi, S.; Gharbali, A.A.; Fonseca, J.M. Feature transformation based on stacked sparse autoencoders for sleep stage classification. In Proceedings of the Doctoral Conference on Computing, Electrical and Industrial Systems, Costa de Caparica, Portugal, 3–5 May 2017; Springer: Berlin, Germany, 2017; pp. 191–200. [Google Scholar]
  35. Zhang, X.; Dou, H.; Ju, T.; Xu, J.; Zhang, S. Fusing heterogeneous features from stacked sparse autoencoder for histopathological image analysis. IEEE J. Biomed. Health Inf. 2016, 20, 1377–1383. [Google Scholar] [CrossRef] [PubMed]
  36. Yang, J.; Bai, Y.; Lin, F.; Liu, M.; Hou, Z.; Liu, X. A novel electrocardiogram arrhythmia classification method based on stacked sparse auto-encoders and softmax regression. Int. J. Mach. Learn. Cybern. 2017, 9, 1–8. [Google Scholar] [CrossRef]
  37. Yuan, C.; Yan, Y.; Zhou, L.; Bai, J.; Wang, L. Automated atrial fibrillation detection based on deep learning network. In Proceedings of the IEEE International Conference on Information and Automation (ICIA), Ningbo, China, 1–3 August 2016; pp. 1159–1164. [Google Scholar]
  38. Zhou, L.; Yan, Y.; Qin, X.; Yuan, C.; Que, D.; Wang, L. Deep learning-based classification of massive electrocardiography data. In Proceedings of the IEEE Electronic and Automation Control Conference (IMCEC) on Advanced Information Management, Communicates, Xi’an, China, 3–5 October 2016; pp. 780–785. [Google Scholar]
  39. Said, A.B.; Mohamed, A.; Elfouly, T.; Harras, K.; Wang, Z.J. Multimodal deep learning approach for joint eeg-emg data compression and classification. arXiv 2017, preprint. arXiv:1703.08970. [Google Scholar]
  40. Spüler, M.; Irastorza-Landa, N.; Sarasola-Sanz, A.; Ramos-Murguialday, A. Extracting muscle synergy patterns from emg data using autoencoders. In Proceedings of the International Conference on Artificial Neural Networks, Barcelona, Spain, 6–9 September 2016; Springer: Berlin, Germany, 2016; pp. 47–54. [Google Scholar]
  41. Rehman, M.Z.U.; Gilani, S.O.; Waris, A.; Niazi, I.K.; Kamavuako, E.N. A novel approach for classification of hand movements using surface emg signals. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, Spain, 18–20 December 2017; pp. 265–269. [Google Scholar]
  42. Vujaklija, I.; Shalchyan, V.; Kamavuako, E.N.; Jiang, N.; Marateb, H.R.; Farina, D. Online mapping of emg signals into kinematics by autoencoding. J. Neuroeng. Rehabil. 2018, 15, 21. [Google Scholar] [CrossRef] [PubMed]
  43. Park, K.-H.; Lee, S.-W. Movement intention decoding based on deep learning for multiuser myoelectric interfaces. In Proceedings of the 4th International Winter Conference on Brain-Computer Interface (BCI), Gangwon Province, South Korea, 22–24 February 2016; pp. 1–2. [Google Scholar]
  44. Atzori, M.; Müller, H. The ninapro database: A resource for semg naturally controlled robotic hand prosthetics. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 7151–7154. [Google Scholar]
  45. Wei, W.; Wong, Y.; Du, Y.; Hu, Y.; Kankanhalli, M.; Geng, W. A multi-stream convolutional neural network for semg-based gesture recognition in muscle-computer interface. Pattern Recognit. Lett. 2017. [Google Scholar] [CrossRef]
  46. Xia, P.; Hu, J.; Peng, Y. Emg-based estimation of limb movement using deep learning with recurrent convolutional neural networks. Artif. Organs 2018, 42, E67–E77. [Google Scholar] [CrossRef] [PubMed]
  47. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture recognition by instantaneous surface emg images. Sci. Rep. 2016, 6, 36571. [Google Scholar] [CrossRef] [PubMed]
  48. Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Geng, W. Surface emg-based inter-session gesture recognition enhanced by deep domain adaptation. Sensors 2017, 17, 458. [Google Scholar] [CrossRef] [PubMed]
  49. Du, Y.; Wong, Y.; Jin, W.; Wei, W.; Hu, Y.; Kankanhalli, M.; Geng, W. Semi-supervised learning for surface emg-based gesture recognition. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 1624–1630. [Google Scholar]
  50. Kamavuako, E.N.; Farina, D.; Yoshida, K.; Jensen, W. Relationship between grasping force and features of single-channel intramuscular emg signals. J. Neurosci. Methods 2009, 185, 143–150. [Google Scholar] [CrossRef] [PubMed]
  51. Farrell, T.R. A comparison of the effects of electrode implantation and targeting on pattern classification accuracy for prosthesis control. IEEE Trans. Biomed. Eng. 2008, 55, 2198–2211. [Google Scholar] [CrossRef] [PubMed]
  52. Hargrove, L.J.; Englehart, K.; Hudgins, B. A comparison of surface and intramuscular myoelectric signal classification. IEEE Trans. Biomed. Eng. 2007, 54, 847–853. [Google Scholar] [CrossRef] [PubMed]
  53. Smith, L.H.; Hargrove, L.J. Comparison of surface and intramuscular emg pattern recognition for simultaneous wrist/hand motion classification. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 4223–4226. [Google Scholar]
  54. Kamavuako, E.N.; Rosenvang, J.C.; Horup, R.; Jensen, W.; Farina, D.; Englehart, K.B. Surface versus untargeted intramuscular emg based classification of simultaneous and dynamically changing movements. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 21, 992–998. [Google Scholar] [CrossRef] [PubMed]
  55. Kamavuako, E.N.; Scheme, E.J.; Englehart, K.B. Wrist torque estimation during simultaneous and continuously changing movements: Surface vs. Untargeted intramuscular emg. J. Neurophysiol. 2013, 109, 2658–2665. [Google Scholar] [CrossRef] [PubMed]
  56. Beale, M.H.; Hagan, M.T.; Demuth, H.B. Neural Network Toolbox™ Reference; MATLAB R2015b; The MathWorks: Natick, MA, USA, 1992. [Google Scholar]
  57. Bengio, Y. Learning deep architectures for ai. Found. Trends® Mach. Learn. 2009, 2, 1–127. [Google Scholar] [CrossRef]
  58. Simon, A.M.; Hargrove, L.J.; Lock, B.A.; Kuiken, T.A. The target achievement control test: Evaluating real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis. J. Rehabil. Res. Dev. 2011, 48, 619. [Google Scholar] [CrossRef] [PubMed]
  59. Young, A.J.; Smith, L.H.; Rouse, E.J.; Hargrove, L.J. A comparison of the real-time controllability of pattern recognition to conventional myoelectric control for discrete and simultaneous movements. J. Neuroeng. Rehabil. 2014, 11, 5. [Google Scholar] [CrossRef] [PubMed]
  60. Bellingegni, A.D.; Gruppioni, E.; Colazzo, G.; Davalli, A.; Sacchetti, R.; Guglielmelli, E.; Zollo, L. Nlr, mlp, svm, and lda: A comparative analysis on emg data from people with trans-radial amputation. J. Neuroeng. Rehabil. 2017, 14, 82. [Google Scholar] [CrossRef] [PubMed]
  61. Le, Q.V. A Tutorial on Deep Learning Part 2: Autoencoders, Convolutional Neural Networks and Recurrent Neural Networks; Google Inc.: Menlo Park, CA, USA, 2015. [Google Scholar]
  62. Zhuang, F.; Cheng, X.; Pan, S.J.; Yu, W.; He, Q.; Shi, Z. Transfer learning with multiple sources via consensus regularized autoencoders. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Nancy, France, 15–19 September 2014; Springer: Berlin, Germany, 2014; pp. 417–431. [Google Scholar]
  63. Olshausen, B.A.; Field, D.J. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vis. Res. 1997, 37, 3311–3325. [Google Scholar] [CrossRef]
  64. Kullback, S. Information Theory and Statistics; Courier Corporation: Chelmsford, MA, USA, 1997. [Google Scholar]
  65. Ng, A. Deep Learning Tutorial. 2015. Available online: http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders (accessed on 1 June 2017).
  66. Ngiam, J.; Coates, A.; Lahiri, A.; Prochnow, B.; Le, Q.V.; Ng, A.Y. On optimization methods for deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), Bellevue, WA, USA, 28 June–2 July 2011; pp. 265–272. [Google Scholar]
  67. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
  68. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy layer-wise training of deep networks. Adv. Neural Inf. Process. Syst. 2007, 19, 153. [Google Scholar]
Figure 1. Electrode placement for (a) an able-bodied subject and (b) a transradial amputee. For intramuscular electromyography (iEMG), six pairs of wires were inserted into the flexor carpi radialis, palmaris longus, flexor digitorum superficialis, extensor carpi radialis longus, extensor digitorum, and extensor carpi ulnaris muscles. Six surface EMG (sEMG) electrodes were placed beside the wires.
Figure 2. Rectified EMG recorded from the six surface electrode systems in a trial of an intact-limb subject. Six iEMG channels were also recorded concurrently (not shown). Eleven movements (including rest) were repeated four times with a contraction and relaxation time of five seconds. A group of four with the same gray level represents the four repetitions of the same movement.
Figure 3. Block diagram of stacked sparse autoencoders (SSAE) used in this work. Features at layer 1 were improved by minimizing the error using Equation (3). These improved features were then fed as input to the next AE and again, improved features at layer 2 were fed to the softmax classifier where labels were obtained. All layers were trained independently from each other and were stacked together. Hence, features were learned in an unsupervised fashion, whereas classification was supervised.
Figure 4. Parameter selection for both layers. As a greedy layer-wise training strategy was adopted, layer two was trained independently of layer one. The same parameter values were varied for both layers, and the best values were chosen to optimise the errors at (a) layer one and (b) layer two.
Figure 5. Mean (and SD) classification error for SSAE and linear discriminant analysis (LDA) classifiers for the four sets of EMG data. The diamond symbol indicates the best classifier with statistical significance (p < 0.001).
Figure 6. Mean (and SD) classification error obtained with the two classifiers for both kinds of data from healthy and amputee subjects. The diamond symbol indicates the best EMG data type, with statistical significance.
Figure 7. Mean (and SD) classification error obtained with SSAE and LDA for both EMG data types from healthy and amputee subjects. The diamond symbol indicates the best classifier with statistical significance (p < 0.001) for each EMG data type of both healthy and amputee subjects.
Figure 8. (a) Two-fold and (b) seven-fold validation. Mean (and SD) classification error obtained with the two classifiers for both kinds of data from healthy and amputee subjects. The diamond symbol indicates the best EMG data type, with statistical significance (p < 0.01).
Table 1. Stacked sparse autoencoder (SSAE) vs. linear discriminant analysis (LDA) performance and surface electromyography (sEMG) vs. intramuscular EMG (iEMG) data comparison for healthy subjects. In each matrix, the upper triangle gives the mean classification errors obtained with SSAE and the lower triangle the errors obtained with LDA for the corresponding pairs of days.
Healthy sEMG Data
Day    D1      D2      D3      D4      D5      D6      D7
D1     -       2.29    3.06    2.23    3.09    2.74    3.88
D2     10.28   -       3.8     3.15    3.23    2.95    3.65
D3     10.4    9.96    -       3.74    4.08    3.57    4.31
D4     11.22   11.37   11.02   -       3.02    2.61    3.45
D5     13.16   12.78   12.68   11.43   -       2.53    3.46
D6     12.74   12.94   12.66   12      11.11   -       2.69
D7     13.54   13.56   13      11.62   11.31   9.95    -

Healthy iEMG Data
Day    D1      D2      D3      D4      D5      D6      D7
D1     -       5.68    6.63    6.03    5.7     5.15    6.25
D2     21.69   -       5.36    4.75    4.79    3.93    4.54
D3     23.42   17.93   -       5.19    5.03    4.78    5.24
D4     21.96   19.5    18.16   -       5.16    5.2     5.33
D5     24.68   21.03   20.52   18.98   -       3.05    3.58
D6     23.59   20.19   20.96   18.73   17.66   -       3.12
D7     23.85   20.53   19.03   19.65   16.55   15.9    -
Table 2. SSAE vs. LDA performance and sEMG vs. iEMG data comparison for transradial amputee subjects. In each matrix, the upper triangle gives the mean classification errors obtained with SSAE and the lower triangle the errors obtained with LDA for the corresponding pairs of days.
Amputee sEMG Data
Day    D1      D2      D3      D4      D5      D6      D7
D1     -       14.56   16.23   14.4    15.47   13.21   12.18
D2     34.75   -       14.32   13.51   12.96   11.54   11.48
D3     38.19   34.03   -       16.4    15.84   12.73   13.31
D4     32.27   31.77   34.36   -       13.49   11.83   12.11
D5     36.1    33.21   34.79   30.56   -       10.34   10.78
D6     31.45   30.42   30.76   27.42   24.18   -       8.57
D7     34.12   31.01   32.27   27.92   25.38   20.91   -

Amputee iEMG Data
Day    D1      D2      D3      D4      D5      D6      D7
D1     -       15.84   17.41   18.64   16      16.46   15.32
D2     36.15   -       14.96   16.99   13.39   13.76   13.53
D3     39.66   34.25   -       16.54   13.76   13.83   13.74
D4     38.16   33.58   33.91   -       15.11   15.2    14.43
D5     38      34.23   35.01   32.42   -       11.79   12.78
D6     36.07   33.56   34.9    32.55   29.47   -       13.16
D7     36.75   31.99   34.11   31.18   28.97   27.72   -
