EEG-based BCI system to control prosthetic finger movements

Background: Advances in assistive technologies can go a long way towards restoring the mobility of paralyzed and/or amputated limbs. In this paper, we propose a system that adopts brain-computer interface (BCI) technology to control prosthetic fingers by thought. To predict the movement of each finger, a complex chain of EEG signal processing algorithms must be applied in order to remove outliers, extract features, discriminate between the fingers and control the prosthetic finger. The proposed method discriminates between the five human fingers, so a multi-classification problem based on an ensemble of one-class classifiers is applied, where each classifier predicts the intention to move one finger. Finally, an adapted machine learning strategy is proposed to predict movements of multiple fingers at the same time. Results: The sensitive regions of the brain related to finger movements are identified and located. The proposed EEG signal processing chain, based on an ensemble of one-class classifiers, reaches a classification accuracy of 81% for five subjects according to the online approach. Unlike most existing prototypes, which control only a single finger and perform only one movement at a time with the dedicated finger, our proposed system enables multiple fingers to perform movements simultaneously. Although the proposed system classifies five tasks, the obtained accuracy remains high compared to a binary classification system. Conclusion: The proposed system contributes to the advancement of prosthetics, allowing people with severe disabilities to perform daily tasks easily.


Background
Brain Computer Interfaces (BCI) are a new type of Human Computer Interaction (HCI) [1], and they have the potential to improve the way people with disabilities control and interact with their applications [2,3]. BCI aim to assist people with severe disabilities by creating an alternative path of communication and interaction with their environment [4]. A BCI is a form of communication between the human brain and a computer. Brain activity leads to changes in electrophysiological signals such as the electroencephalogram (EEG), magnetoencephalography (MEG), electrocorticogram (ECoG), functional near-infrared spectroscopy (fNIRS) and electromyography (EMG). A BCI system measures these signals and translates them into artificial control commands in real time.
In 2006, the United Nations (UN) adopted a "Convention on the Rights of Persons with Disabilities" (UNCRPD) that recognizes autonomy and independence of living as basic human rights [5]. In line with this treaty, the system proposed here aims to restore finger mobility in people with severe disabilities. Many people lack the ability to use their fingers, including those with amputated fingers and/or hands and those with diseases that impair the neuromuscular channels, such as amyotrophic lateral sclerosis (ALS), spinal cord injury and cerebral palsy. Given that such people encounter difficulties when using their arms in general, the system we outline here uses BCI technology to develop a brain-controlled system that analyses and decodes EEG signals in order to restore finger movements. This would contribute to the development of brain-controlled next-generation prostheses, which would enable people with disabilities to regain the mobility of their fingers.
Existing prosthetics are electric or mechanical and are controlled using intentional motor activity to restore the mobility of the amputated part. For instance, a prosthetic hand can be controlled with a shoulder using a harness [6,7]. Electric prosthetics are controlled using the residual activity of nerves or muscles in the extremity of the amputated area. All available prosthetic arms/fingers are inefficient and unusable for individuals who are completely paralyzed. The primary goal of this project was to design and implement a classification strategy to predict finger movements using EEG signals, which would help in developing the next generation of prostheses. EEG signals are defined in time and frequency domains as brain activity that is triggered by muscular contractions or by thinking about specific tasks. The generated rhythms, known as sensorimotor rhythms (SMR), appear in well-defined locations as well as in specific frequency bands according to the organ responsible for the muscle contraction [10]. Brain activity is translated onto an artificial actuator after filtering, feature extraction and classification.
Advances in BCI contribute to this technology, which provides an alternative method of prosthetic control. In the literature, many studies propose systems to control prosthetics. These systems can be classified according to the method of recording brain activity, i.e. EEG, MEG, ECoG, fNIRS or EMG.

ECoG based systems:
• Multichannel ECoG decoding of individual finger movements was proposed in [19].
The system included a filter unit, common spatial patterns (CSP) and SVM. In order to deal with the multi-class problem, 15 classifiers were used: 10 of them covered the one-versus-one class combinations, and the other 5 pitted the correlated groups against each other, i.e. the thumb and index against the rest of the fingers, and so on. The system achieved an accuracy rate of 86.3%.

fNIRS based systems:
• A fNIRS system for decoding finger flexion and extension was presented in [20], where the brain activity of three subjects moving the fingers on their right hand (flexing and extending the fingers and thumb) was measured.
Filtering and feature extraction were not implemented due to the small size of the captured signals. An SVM classifier was used, and only oxy-hemoglobin signals were considered during classification. The measured accuracy according to 10-fold cross-validation was 62%.

EMG based systems:
• A proposal including five tasks: thumb, index, middle, ring and pinky movements.

Results
The main objective of this section is to demonstrate the efficiency of the proposed method in discriminating between the five finger movements using the user's intention. To this end, the recorded trials were first applied directly to the signal processing chain without the proposed channel selection algorithm and filtering technique. In this first approach, the EEG features were extracted using a CSP spatial filter and classified with LDA while maintaining the same data partition as presented in Algorithm 1. The system accuracy varied between subjects from 52% to 60%, with an average of 57%. This accuracy is poor, which makes the system unusable. By keeping the same techniques and integrating the channel selection method, the number of channels used during acquisition was minimized by identifying the relevant channels and removing the others. Figure 1 shows the identified relevant electrodes for each finger across all subjects. As depicted in Figure 1, although 64 channels were used during the acquisition process, more than 70% of the channels were removed because they did not contain useful information. Furthermore, the most active channels were located in the left hemisphere and in the center of the cortex, because the scope of this study was right-hand finger movements.
The integration of the channel selection and artifact removal algorithms with the same feature extraction and classification algorithms enhanced the system accuracy significantly, by more than 20% for each subject. The overall system accuracy obtained was 81%. Table 2 presents the accuracy obtained for each subject. Despite the system classifying five finger movements, the measured accuracy remains high. In fact, by using this approach, we are able to detect multiple finger movements simultaneously, in addition to having high model accuracy.

Discussion
The research was conducted using a relatively large number of data instances; we hope the recorded sessions will help other BCI researchers further improve on this data. The accuracy of the initial prediction models on raw data ranged from 52% to 59%, and this was greatly improved by an intensive cleaning and preprocessing phase.
An accuracy of 83% demonstrates the quality of our approach. Our preprocessing approach achieved high accuracy by choosing the relevant channels after comparing them with reference channels, which removed noise from the dataset by excluding unimportant channels. By combining this preprocessing sequence with the feature extraction approach, the prediction model receives a separable dataset that can be classified using a machine learning model such as LDA. The main advantage of giving every finger its own classifier is that the models can predict multiple finger movements at the same time, which has not been done or discussed in previous work. Determining the angle of finger flexion and distinguishing between finger movements of both hands could be investigated in future work, along with improving the accuracy; this could help disabled people achieve more independence in their daily lives. Table 3 presents the accuracy obtained by the proposed system alongside the overall results of existing methods, which are validated according to online and offline approaches. The proposed system significantly improves performance, achieving an average system accuracy of 81% according to the online approach.
The proposed finger decoding system outperforms those of previous studies; e.g., the average accuracy herein increased by 4% compared to the best previous system presented in [14]. Moreover, the proposed system significantly improves the runtime by using robust and efficient algorithms, contrary to the methods presented in [11,12,14,22].

Conclusion
This study aimed to develop a BCI system for disabled people who lack the use of their limbs. The outcomes of this study may contribute to the development of a next-generation prosthesis, i.e. a brain-controlled prosthetic arm. Such prostheses are an alternative method for disabled people to restore their mobility. We developed a prediction method that consists of a set of one-class classifiers. The proposed method achieved an accuracy of 83%. Existing EEG BCI systems decode only single finger movements. We trained different models, e.g. SVM, Gaussian Naïve Bayes, Logistic Regression, and LDA. LDA was the classifier which obtained the best results. The system was trained and tested using data recorded from volunteers using a g.tec g.HIamp 80-channel amplifier. These promising results can significantly increase the control dimension of EEG-based BCI technologies and potentially facilitate their development with rich control signals to drive complex applications. Our future work will target the detection of continuous finger movements.

Methods
Our methodology addresses the requirements of a multi-class classification problem for a successful BCI system that decodes finger movements from EEG brain activity signals. Figure 2 presents the general structure of the proposed system. We started by creating our own dataset using an 80-channel g.HIamp amplifier from g.tec. To do this, volunteers were asked to move their fingers in random order. Every single finger movement was considered as a single trial. EEG signals corresponding to the subjects' trials were recorded during different sessions, and every trial was labeled distinctively. Next, the recorded data was processed by removing artifacts to increase the signal-to-noise ratio (SNR) of the EEG signals, and the set of electrodes was reduced to those sensitive to finger movements. A common spatial pattern (CSP) algorithm was then applied in order to reduce the dimensionality of the EEG signal and prepare features which represented finger movements and were meaningful to the classification stage, the last unit of the signal processing chain. Finally, an ensemble of one-class classifiers was used to decode finger movements. Every one-class classifier was trained to detect the movements of a given finger. To avoid over-fitting, every classifier was trained on one portion of the data set and tested on another.
Accuracy measurement was used to determine the performance of every one-class classifier.

Scenario Program
A scenario program interface (SPI) was developed to guide the subjects during the recording sessions. It displayed the instructions for the subjects on a screen. Figure 3.(a) depicts the main menu of the SPI, which indicates the general information related to each subject, the scenario mode, the ready duration, the flex duration, the waiting duration, and the number of trials in each session. Once the EEG recording process was launched, a state machine composed of three phases was followed: • Get ready phase (Figure 3.(b)): during this phase, a random finger/limb movement is selected and a corresponding animated picture ("gif" file) is displayed by the scenario program on the screen.
• Action phase (Figure 3.(c)): during this phase, the subject moves the selected finger/limb.
• Rest phase (Figure 3.(d)): during this phase, the subject is in a rest state. The scenario program was designed and developed in such a way that the duration of every phase was generic. The durations used in the recording sessions were as follows: 2 seconds for the get-ready phase, 2 seconds for the action phase and 2 seconds for the rest phase. The defined scenario was repeated for every trial; the number of trials was also set as a parameter in the program.
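The three-phase trial structure above can be sketched as a simple schedule generator; the function and variable names below are illustrative and not part of the original SPI implementation:

```python
import random

# Phase durations in seconds, matching the recording sessions described
# above (2 s get-ready, 2 s action, 2 s rest).
PHASES = [("get_ready", 2.0), ("action", 2.0), ("rest", 2.0)]
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def build_schedule(n_trials, seed=0):
    """Return a list of (trial, finger, phase, duration) events.

    A random finger is drawn for each trial, and every trial cycles
    through the get-ready, action and rest phases.
    """
    rng = random.Random(seed)
    schedule = []
    for trial in range(n_trials):
        finger = rng.choice(FINGERS)
        for phase, duration in PHASES:
            schedule.append((trial, finger, phase, duration))
    return schedule

schedule = build_schedule(n_trials=2)
```

Each trial contributes three events of 2 s each, i.e. 6 s per trial, which matches the 6 s epochs used later in the labeling stage.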

Experimental paradigm
The data used in this project was created locally and consisted of EEG signals recorded from five subjects. For every finger, 180 trials were recorded during 3 to 4 different sessions.

EEG signal acquisition
The EEG signals were recorded using a g.tec g.HIamp amplifier. The signals were captured through 64 electrodes placed on the scalp according to the international 10-20 localization system. The sampling frequency of the signals was set to 256 Hz, and a filtering stage was applied using a Chebyshev-type band-pass filter in order to keep the frequency components between 1 Hz and 60 Hz.
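As a rough illustration of this acquisition filter, the sketch below applies a zero-phase Chebyshev type-I band-pass to a simulated 64-channel recording; the filter order and pass-band ripple are assumptions, since the paper states only the filter type and the 1-60 Hz band:

```python
import numpy as np
from scipy.signal import cheby1, sosfiltfilt

FS = 256  # sampling frequency (Hz)

# Band-pass keeping the 1-60 Hz components. The order (4) and the
# 1 dB pass-band ripple are illustrative assumptions.
sos = cheby1(N=4, rp=1, Wn=[1, 60], btype="bandpass", output="sos", fs=FS)

def bandpass(eeg):
    """Zero-phase band-pass filter an (n_channels, n_samples) array."""
    return sosfiltfilt(sos, eeg, axis=-1)

rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 6 * FS))  # 64 channels, one 6 s trial
filtered = bandpass(raw)
```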

Labeling signals
The recording process was done over four sessions in order to minimize subject discomfort. Each session was continuous and needed to be discretized into a sequence of six-second trials. The first 20 seconds of each session were discarded due to the BCI amplifier initialization delay. The EEG signals were then subdivided into six-second intervals, where each epoch corresponded to one finger movement.
Each interval was labeled with the corresponding label from the scenario program.
This process was repeated for every session, and the processed sessions were concatenated, giving each subject a fully labeled signal.
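The discard-and-epoch procedure can be sketched as follows; the array layout and function names are hypothetical:

```python
import numpy as np

FS = 256              # sampling rate (Hz)
TRIAL_SECONDS = 6     # one finger movement per six-second interval
DISCARD_SECONDS = 20  # amplifier initialization delay

def epoch_session(session, labels):
    """Cut a continuous (n_channels, n_samples) session into labeled trials.

    `labels` is the per-trial finger sequence logged by the scenario
    program. The first 20 s are discarded, then the remainder is split
    into consecutive 6 s epochs, one per trial.
    """
    usable = session[:, DISCARD_SECONDS * FS:]
    samples_per_trial = TRIAL_SECONDS * FS
    n_trials = min(len(labels), usable.shape[1] // samples_per_trial)
    trimmed = usable[:, :n_trials * samples_per_trial]
    epochs = trimmed.reshape(session.shape[0], n_trials, samples_per_trial)
    return epochs.transpose(1, 0, 2), labels[:n_trials]
```

Concatenating the per-session outputs yields the fully labeled signal described above.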

Terminology and annotations
• An electrode (e) is an electrical conductor used to acquire brain signals. An electrode has three main characteristics: a label (the name of the electrode) and x and y coordinates, which indicate the placement of the electrode on the scalp.
• E is the set of electrodes (e) on a cap.
• A motor imagery (m), also called a motor imagery task, is a mental process by which an individual simulates a given movement action.
• Ψ is a set of motor imagery tasks. The motor imagery tasks which we used here are imaginary thumb, index, middle, ring and pinky fingers movements.
• A trial (t) is a set of brain signals recorded with a set of electrodes E during a given motor imagery task m.
• θ is a set of trials: θ = {t_i}.
• ς is the set of subjects, from each of which a set of trials was recorded.
• Pow(e, t) is the power of the electrode e calculated from the trial t.
• A rest, denoted r, is a set of brain signals recorded using a set of electrodes E during a rest period. In this study it corresponds to the portion of a trial recorded during the 0-1 s period of the trial.
• R is a set of rests.
• Pow(e, R) is defined as the average power of the electrode e during the distinct rest periods of R, according to the expression:
Pow(e, R) = (1/|R|) Σ_{r ∈ R} Pow(e, r)   (1)
• ERD/ERS(e, t) is defined as the percentage of power increase or decrease in the electrode e during the trial t relative to a reference period R, according to the expression:
ERD/ERS(e, t) = (Pow(e, t) − Pow(e, R)) / Pow(e, R)   (2)
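Equation 2 can be computed directly from the definitions above; this sketch assumes power is estimated as the mean squared amplitude, which the paper does not specify:

```python
import numpy as np

def power(signal):
    """Mean power of a 1-D signal (one electrode over one interval).

    Estimated here as the mean squared amplitude, an assumed choice.
    """
    return float(np.mean(np.square(signal)))

def erd_ers(trial, rests):
    """Relative power change of `trial` with respect to the average
    power over the rest periods `rests` (equation 2). Negative values
    indicate desynchronization (ERD), positive values synchronization
    (ERS)."""
    rest_power = np.mean([power(r) for r in rests])
    return (power(trial) - rest_power) / rest_power
```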

Artifacts removal
After labeling the EEG signals, a filter block was applied to remove artifacts and keep only the frequency components related to the intention of finger movement.
These frequency components are often between 8 Hz and 30 Hz [23]. Thus, a 4th-order finite impulse response filter was applied, allowing the removal of frequency components outside this band while maintaining a zero-phase response for the signal [24]. Subsequently, a common average reference technique was applied, in which the average signal over all electrodes is calculated and subtracted from the EEG signal at every electrode for every time point. This step allows for discrimination between positive and negative peaks in the EEG signals and helps locate signal sources in a noisy environment, leading to an improvement in the signal-to-noise ratio [25]. The EEG signals were converted and computed according to equation 3:
T_CAR(j) = T(j) − (1/|E|) Σ_{k=1}^{|E|} T(k)   (3)
where |E| is the total number of electrodes used during the recording process of one trial T, and T(k) is the EEG signal at the electrode k. Unlike other systems, the proposed method integrates a channel selection block that allows the selection of the most relevant channels for each trial.
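A minimal sketch of this preprocessing step, assuming the zero-phase FIR response is obtained by forward-backward filtering and using an illustrative tap count (the paper specifies only the 8-30 Hz band and the zero-phase requirement):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 256  # sampling rate (Hz)

# FIR band-pass keeping the 8-30 Hz band; 129 taps is an assumption.
taps = firwin(numtaps=129, cutoff=[8, 30], pass_zero=False, fs=FS)

def preprocess(trial):
    """Band-pass filter then common-average-reference one trial of
    shape (n_electrodes, n_samples)."""
    filtered = filtfilt(taps, [1.0], trial, axis=-1)
    # Equation 3: subtract the instantaneous mean over all electrodes.
    return filtered - filtered.mean(axis=0, keepdims=True)
```

After the common average reference, the mean over electrodes at every time point is zero by construction.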

Selection of relevant electrodes
After removing the artifacts, a novel channel selection method was proposed in order to decrease the number of electrodes before training the proposed system.
These are the definitions of the trial selectors:
• σ: For a given motor imagery task m_i, this selector returns the subset of trials recorded during m_i. It is defined as follows: σ(m_i) = {t_j ∈ θ such that t_j is recorded during the motor imagery task m_i}.
• δ: For a given subject s_i, this selector returns the subset of trials recorded during sessions of the subject s_i. It is defined as follows: δ(s_i) = {t_j ∈ θ such that t_j is recorded during a session of the subject s_i}.
• φ: For a given subject s_i and a given motor imagery task m_j, this selector returns the subset of trials recorded during sessions of the subject s_i while performing the motor imagery task m_j.
• τ: For a given subject s_i, a given motor imagery task m_i and an electrode e_k, this selector returns the subset of trials recorded during sessions of the subject s_i while performing the motor imagery task m_i where the change in power of the electrode e_k is significant (exceeds or equals the change in power at a reference electrode).
The function ρ calculates the probability of a significant change in power of the electrode e_k for the subject s_i while performing the motor imagery task m_j.
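A minimal sketch of the selectors and of ρ, assuming trials are stored as dictionaries; the field names (`subject`, `task`, `power`) are hypothetical:

```python
def sigma(trials, task):
    """σ: trials recorded during motor imagery task `task`."""
    return [t for t in trials if t["task"] == task]

def delta(trials, subject):
    """δ: trials recorded during sessions of `subject`."""
    return [t for t in trials if t["subject"] == subject]

def phi(trials, subject, task):
    """φ: trials of `subject` while performing `task`."""
    return [t for t in trials if t["subject"] == subject and t["task"] == task]

def tau(trials, subject, task, electrode, ref_electrode):
    """τ: trials of φ(subject, task) where the power change at
    `electrode` is at least the change at a reference electrode."""
    return [t for t in phi(trials, subject, task)
            if t["power"][electrode] >= t["power"][ref_electrode]]

def rho(trials, subject, task, electrode, ref_electrode):
    """ρ: probability of a significant power change at `electrode`,
    i.e. |τ| / |φ| over the subject's trials for that task."""
    selected = phi(trials, subject, task)
    if not selected:
        return 0.0
    return len(tau(trials, subject, task, electrode, ref_electrode)) / len(selected)
```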

Feature Extraction
Once the outliers were removed from the EEG signals and the relevant electrodes selected, features of the EEG signals were extracted using the well-known common spatial patterns (CSP) method. The aim of this method is to extract and keep significant activity or rhythms and eliminate all redundant EEG signals. More theoretical details about CSP are presented in [26]. This technique applies to two-class problems, so a one-versus-rest approach is used to prepare the features related to each finger movement. These features represent the most significant energy in the α and β bands, which are the most likely to contain significant motor imagery information [23]. Furthermore, the feature extraction method offloads the classifier's work and facilitates discrimination between classes.
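A compact CSP sketch for one one-versus-rest pair, using the standard generalized-eigenvalue formulation with log-variance features (the number of filter pairs below is an illustrative assumption; the paper does not state it):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters for two classes of trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) filter matrix whose rows
    maximize variance for one class while minimizing it for the other.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w.
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)  # ascending
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def csp_features(trial, filters):
    """Normalized log-variance features of one (n_channels, n_samples) trial."""
    projected = filters @ trial
    variances = projected.var(axis=1)
    return np.log(variances / variances.sum())
```

In the one-versus-rest setting, `trials_a` holds the trials of one finger and `trials_b` the trials of the remaining fingers, repeated once per finger.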

Finger movements classification
The extracted features and the corresponding labels were split into two sets, one assigned for training and the other for testing. The presented results are measured according to a 5-fold cross-validation approach. The training set for each finger was concatenated with 20% of the other fingers' data; for the testing set, 50% of the other fingers' data was added. Each finger had its own classification model: the signal was copied to each finger's model, and each model classified whether that signal belonged to its particular finger. This approach made moving multiple fingers at the same time possible, as each finger's model works independently. Figure 1 summarizes the data set decomposition. Many classification models were tested, e.g. SVM, Logistic Regression, Gaussian Naive Bayes, and LDA.
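The per-finger ensemble can be sketched with scikit-learn's LDA; the synthetic setup below only illustrates how independent binary models permit multi-finger predictions and does not reproduce the paper's 20%/50% data split:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def train_finger_models(features, labels):
    """Train one binary LDA model per finger (target finger vs. the rest).

    features: (n_trials, n_features) feature matrix (e.g. CSP features).
    labels:   list of finger names, one per trial.
    """
    labels = np.asarray(labels)
    models = {}
    for finger in FINGERS:
        y = (labels == finger).astype(int)
        models[finger] = LinearDiscriminantAnalysis().fit(features, y)
    return models

def predict_moving_fingers(models, feature_vector):
    """Each model votes independently, so several fingers can be
    predicted to move at the same time."""
    x = np.asarray(feature_vector).reshape(1, -1)
    return [f for f, m in models.items() if m.predict(x)[0] == 1]
```

Because every model makes its own yes/no decision, the returned list may contain zero, one or several fingers, which is what enables simultaneous finger movements.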